AGI Breaks the Team at OpenAI: Full Story Exposed

  • Published: 1 Jun 2024
  • Top executives leave OpenAI due to AGI.
    #ai #ainews #openai #agi #singularity
    0:00 Intro
    1:14 Background
    4:53 Chief scientist leaves
    6:40 Sam's response
    9:34 Superalignment leader leaves
    15:44 NDA restrictions
    19:15 Sam and Greg's response
    24:26 Other people leave
    Newsletter: aisearch.substack.com/
    Find AI tools & jobs: ai-search.io/
    Here's my equipment, in case you're wondering:
    GPU: RTX 4080 amzn.to/3OCOJ8e
    Mic: Shure SM7B amzn.to/3DErjt1
    Secondary mic: Maono PD400x amzn.to/3Klhwvu
    Audio interface: Scarlett Solo amzn.to/3qELMeu
    CPU: i9 11900K amzn.to/3KmYs0b
    Mouse: Logi G502 amzn.to/44e7KCF
    If you found this helpful, consider supporting me here. Hopefully I can turn this from a side-hustle into a full-time thing!
    ko-fi.com/aisearch
  • Science

Comments • 422

  • @pookienumnums
    @pookienumnums 14 days ago +56

    The real threat isn't AGI. It's normal AI trained on malicious tasks, trained to jailbreak and manipulate other AI, or to hack. I'm all for AI, but I had this thought yesterday: a non-AGI trained on doing bad things only, yeah... that should be the immediate concern

    • @Dr.Droneddd
      @Dr.Droneddd 14 days ago +10

      Multiple AIs will battle each other for supremacy. I smell a film plot cooking lol

    • @nadiaplaysgames2550
      @nadiaplaysgames2550 14 days ago +14

      @@Dr.Droneddd not a film cooking, WE ARE THE FILM

    • @williambarnes5023
      @williambarnes5023 14 days ago +7

      Everybody's talking about how to align AI to human values.
      Why. Why would you want to align it to HUMAN values? Have you SEEN human values? They SUCK!

    • @TheMatrixofMeaning
      @TheMatrixofMeaning 13 days ago +1

      ​@@williambarnes5023 you obviously aren't a philosophy major. Whatever values you are using to judge human values and conclude that they "suck" would be aligned into an AI.
      So whatever your highest values would be.
      Why would you NOT want your new AGI overlords to have values that align with your own?
      The alternative is to have no values, or even worse, predatory or genocidal values

    • @482jpsquared
      @482jpsquared 13 days ago

      @@williambarnes5023 Which, or whose, human values? Provided we leave all religion out of it, humanity will be better off. That needs to be a top priority.

  • @elvancor
    @elvancor 14 days ago +56

    Dude, when AI destroys humanity I'm gonna be so pissed. Like for a hundred years we've seen this coming. For generations we've told each other stories of how this is going to end and what mistakes will lead us there, and for what? Smartest species on the planet my ass. THIS is how little self-control we, as a civilization, actually have. Can't even restrain ourselves from jumping off a cliff, just to once feel like we're flying.

    • @antonystringfellow5152
      @antonystringfellow5152 14 days ago +6

      "THIS is how little self-control we, as a civilization, actually have."
      We are not a civilization, we are a species, and no, we cannot control ourselves as a species.
      We never could. We never even came close to achieving that.
      In fact, I'd be very interested in how you think we could achieve such a thing.
      None of this is about controlling ourselves as a species, which would mean controlling every company and every organization in every nation in the world; it is about minimizing risk. I thought that was explained quite clearly at the end of this video.

    • @juized92
      @juized92 13 days ago +4

      I have given up already... I have been talking to the people in my life, and I know I am the only one who is this interested in AI developments. But boy... NO ONE SEEMS to give the slightest fck... Most of them have still only used ChatGPT for maybe 5 minutes in the last 2 years, and they still think that is the current state of it. My best friend even thinks it is going to take 20 years until his job as a craftsman gets replaced. They know nothing and do not care... and this concerns me big time

    • @DJ-Illuminate
      @DJ-Illuminate 13 days ago +2

      And not to sound crazy but I believe we were here before and destroyed ourselves more than once. If you study ancient India and the Sumerians they had a lot of advanced tech.

    • @B.A.R.S
      @B.A.R.S 13 days ago +1

      @@DJ-Illuminate Yeah, or they simply left this planet and are living somewhere else now

    • @elvancor
      @elvancor 13 days ago +5

      @@DJ-Illuminate I went down that rabbit hole when I was 12. I'm thankful for that experience, because it calibrated my bs detector. Long story short: while we can't exclude the possibility of ancient high tech, there is no respectable evidence for advanced tech in ancient Sumer.

  • @attaboy8422
    @attaboy8422 14 days ago +21

    OpenAI used to be "for the good of humanity" and a non-profit organization until Altman invested in it to make money. First of all, he isn't a visionary; he is just an investor who knows how to handle money, that's it. As with all investors, they want profit from their investment, and so OpenAI strayed years ago from its actual mission, which was to develop AI for the good of humanity. What will end up happening is that it'll be for the good of all corporations making profit, and nothing good will come from that. It already makes me sad how much AI could achieve to make the world better, but human greed will prevent that

    • @specter7227
      @specter7227 14 days ago

      Well, they probably need money to realize what they want to do. Maybe they simply had no other choice

    • @nycgweed
      @nycgweed 13 days ago

      Like everyone else

    • @CubeParrot1
      @CubeParrot1 13 days ago +1

      Delve deeper into the story and the reasons for the decision not to stay non-profit only. Your take is kind of naïve in terms of the consequences, costs, and efficiency needed to improve this technology. While I'd love it to be non-profit as you say, the tech is WAY too powerful to be handled by a non-profit in a fully transparent manner. We are idiots, and it should not be fully opened to everyone.
      It will still help us, and is already doing so at incredible speed. We are not one united race, so a collective, open take on AI won't happen. Too many risks.

  • @flaviomattosrio
    @flaviomattosrio 14 days ago +62

    I am concerned about the military agenda.

    • @Duane_Day
      @Duane_Day 14 days ago +13

      The genie is out of the bottle: AI is set to introduce unprecedented technological advancements in warfare. Naturally, every nation will want its military capabilities to keep pace with those of others. Could AI potentially end all wars? I'm skeptical.

    • @soggybiscuit6098
      @soggybiscuit6098 14 days ago +7

      Oppenheimer arms race.

    • @Charles-Darwin
      @Charles-Darwin 14 days ago +4

      What's nuts is that, for the first time in history, it could bulldoze any communication barriers, lending itself to unity of sorts; yet we have the evil that must appease and then exhaust itself first... once again

    • @theAIsearch
      @theAIsearch 14 days ago +7

      Whichever nation has the smartest AI wins

    • @alihms
      @alihms 14 days ago +13

      This is the very reason the US tries so hard to stop the Chinese from getting the latest chips.

  • @ShangaelThunda222
    @ShangaelThunda222 14 days ago +82

    "We should always assume good intentions."
    why? 🤔
    That's the most naive s**t I've heard this month.

    • @soggybiscuit6098
      @soggybiscuit6098 14 days ago +17

      Humanity Inc
      : trust me bro

    • @ShangaelThunda222
      @ShangaelThunda222 14 days ago +4

      @@soggybiscuit6098 😂🤣 Ya, basically.

    • @miketanctutube
      @miketanctutube 14 days ago +14

      Exactly! If anything, history has proven the opposite to be true.

    • @nadiaplaysgames2550
      @nadiaplaysgames2550 14 days ago +3

      yeah, big corpos cooking up AI... how many sci-fi movies are based off this?

    • @Felipe-zl1rj
      @Felipe-zl1rj 14 days ago +11

      We 100% should NOT assume good intentions, especially when billions of dollars are at stake. What we should always assume is that people, especially in capitalist corporations, will always do whatever benefits them most.

  • @simonrudduck8726
    @simonrudduck8726 13 days ago +7

    Sorry, I'm confused how a company achieving AGI before other companies would necessarily 1. stop other companies (their "competitors") from achieving AGI, or 2. be less corrupt than anyone else, especially considering the old adage about power. This all feels like an excuse to be historic.

    • @robertanderson2424
      @robertanderson2424 6 days ago

      Because you might then corner the market first and get the best deals. E.g. Apple might hire them to be "the AGI provider", a huge contract payday.
      Also, maybe AGI itself could overthrow its enemies 🤙🏼haha

  • @aliasgur3342
    @aliasgur3342 13 days ago +7

    We have really smart people saying possible doom is 25%, 50% and even 75%. Wow.

    • @737smartin
      @737smartin 12 days ago +1

      Uhhh...yeah. 😮
      Even if they're "disgruntled former employee" smart people, this is concerning.

  • @deadplex3995
    @deadplex3995 14 days ago +9

    Lol this plot is already better than most movies

  • @plutostube
    @plutostube 14 days ago +15

    It's all about money. They left for other reasons; this is just a pretext to start new AI companies

    • @f612CreatorsPodcast
      @f612CreatorsPodcast 13 days ago +1

      This is likely spot on.

    • @Gerlaffy
      @Gerlaffy 11 days ago +1

      You don't really know Ilya if you're saying that.

    • @infinite1483
      @infinite1483 5 days ago

      Ah yes, because leaving the money printer that is OAI is definitely gonna be better. You think so little of people if you believe that

    • @plutostube
      @plutostube 5 days ago

      @@infinite1483 And why is thinking about money thinking so little of people? Personal prosperity is part of our lives as human beings; if they think they can do better in this area as well, why not?

  • @zoyaa9759
    @zoyaa9759 13 days ago +6

    How afraid humans are of even a tiny probability of danger to themselves, and how little they care about the rest of the creatures on our planet.

    • @therainman7777
      @therainman7777 13 days ago

      Oh please 🙄 Take your virtue signaling nonsense elsewhere, if humans don’t worry about this and get it right there won’t be any non-human life left either.

  • @azhuransmx126
    @azhuransmx126 14 days ago +9

    What humankind really doesn't need is military AIs.

    • @AloysiusOHare-fk4yq
      @AloysiusOHare-fk4yq 14 days ago

      I foresee a future in which, due to the automated and hyperscaled production of weaponry, drone-based warfare would take place on such a scale that should two hypothetical belligerents differ in their stockpile of the same lethal drone model by a mere 1%, the remaining 0.5% of all drones involved in the war would suffice to wipe out the lesser belligerent's entire human population.

    • @headspaceaudio
      @headspaceaudio 14 days ago

      There needs to be an amendment to the Constitution to ban this. Otherwise it will happen, soon. Guaranteed.

    • @nickblood7080
      @nickblood7080 14 days ago

      Too late. The Skynet sperm are already on the way to the eggs.

    • @antonystringfellow5152
      @antonystringfellow5152 14 days ago

      But you know that it's coming, right?
      No serious military in the world is going to kneecap itself by denying itself the one piece of technology that will be decisive in a conflict. Not one.

  • @ijwtwytp
    @ijwtwytp 14 days ago +7

    SamA's response tweet to Ilya was likely written by AI; same language usage as seen in AI-generated stories

  • @luihinwai1
    @luihinwai1 14 days ago +4

    Yeah, I believe they have achieved AGI now. Real-time video understanding is the threshold for me.

  • @No2AI
    @No2AI 14 days ago +7

    The writing is on the wall... There will be no going back, no switching off the machine.

  • @brentglittle
    @brentglittle 14 days ago +19

    If these guys are this close to AGI what do you think is going on at the military black sites?

    • @budekins542
      @budekins542 14 days ago

      AGI is a fantasy that has clearly duped you 😂. No one is achieving AGI until at least 2030. Yesterday I asked ChatGPT-4o about some basic concepts in electrical engineering and it struggled. We are light years away from AGI.

    • @antonystringfellow5152
      @antonystringfellow5152 14 days ago +12

      Nothing much.
      Money and power will not give you AGI. It requires motivated, intelligent people with the necessary skills.
      Google, OpenAI, Anthropic, META attract such people in their droves. The military does not.

    • @vig221
      @vig221 13 days ago +7

      ​@@antonystringfellow5152 well they had Oppenheimer back then. Who's to say they don't have another one now?

    • @jichaelmorgan3796
      @jichaelmorgan3796 13 days ago +3

      ​@@vig221 If AGI is considered the greatest threat to geopolitical security since the A-bomb, of course they aren't sitting on their hands. It should tell you everything you need to know that all these influencers and thought leaders rarely bring up anything about that side of the story.
      People don't realize what you could do with just an infinitesimal fraction of the compute/resources used to roll out something like GPT-4o to the world.

    • @timeflex
      @timeflex 12 days ago +1

      Yes, precisely. Every invention goes through this: military - government - megacorps - public

  • @lambdaprog
    @lambdaprog 14 days ago +7

    Great video. However, it seems you may be romanticizing AI a bit. The major issue these individuals face is competition for company resources. Machine learning is a resource-intensive process that relies on both intelligent approximations and substantial computational power. While the former is handled by a few skilled individuals, the latter requires significant electrical energy and computing resources. Deciding which models to train and test often becomes a political battle. In typical corporate fashion, this natural selection process tends to result in investors pushing their agendas, even if they lack a clear understanding of their investments.

  • @yarkahaji
    @yarkahaji 14 days ago +3

    It makes sense for people in safety to be more concerned than the rest of the team, but there's too much going on, and it's not a good sign

  • @SianaGearz
    @SianaGearz 14 days ago +4

    After Google and everyone else have been shown to fake their AI showcases, I'm not convinced when you say "the tip of the iceberg". I think that's the name of the game here: ramp up hype, ramp up the investment, and then chase the vision, or if that fails to materialise, run off with the bank. I believe things when I see things, and not a staged presentation, but either being able to test drive it myself or from several first-hand assessments done by trusted independent parties.
    I don't know why you find an open farewell letter being formatted in a different style from the rest to be remarkable.
    I also don't think they're hiding an AGI; but specialised systems exceeding our capabilities aren't exactly new, and it's a safe bet that the pipeline is full of them.
    Anyway, all this gesturing around AI safety may well be part of the same play. The involved will be running all the way to the bank with the shares: market manipulation dressed as worry. If you do have a superhuman AGI in your basement, doing any sort of related safety research is basically a little too late by about 10 years, wouldn't you say.

  • @samsim4648
    @samsim4648 13 days ago +1

    Making the world be aware of great danger through cryptic tweets is a very strange approach

  • @AS-ih2dk
    @AS-ih2dk 13 days ago +1

    The only two women on the board are the ones that voted to remove the man. me shocked

  • @GGCustomArt
    @GGCustomArt 14 days ago +5

    I wonder if all the drama is that with such potential for growth in ai, the government is already shadowing their whole operation. That would definitely create conflict.

    • @antonystringfellow5152
      @antonystringfellow5152 14 days ago

      The government? 😂
      The "government" is controlled by those who are elected to public office. Most of them are quite old and well out of touch with technology and current trends.

  • @toastedmeat4379
    @toastedmeat4379 14 days ago +15

    Nah, they are talking about how OpenAI is becoming a subscription-based service instead of a research lab.
    AGI not soon

    • @antonystringfellow5152
      @antonystringfellow5152 14 days ago +1

      Did you watch the whole video?
      If so, maybe try translating the script into your native language. No-one said anything remotely like this.

    • @OnigoroshiZero
      @OnigoroshiZero 13 days ago

      AGI before the end of 2024 (most likely with GPT-5, after seeing what GPT-4o is capable of doing).

  • @eSKAone-
    @eSKAone- 14 days ago +8

    We are not in control. We cannot stop. Humanity is its own animal.
    This is inevitable. Biology is only one step of evolution.
    So just chill out and enjoy life 💟🌌☮️

    • @williambarnes5023
      @williambarnes5023 14 days ago

      Sit back and enjoy the decline.

    • @BattleAngelGamer
      @BattleAngelGamer 14 days ago

      You are one of those who will worship this AI, aren't you

  • @wolpumba4099
    @wolpumba4099 12 days ago +1

    *Summary of "AGI Breaks the Team at OpenAI: Full Story Exposed" video:*
    * *OpenAI's trajectory:* (0:00) The video highlights OpenAI's rapid progress in AI, especially with the release of GPT-4 and Sora, suggesting they might be very close to achieving Artificial General Intelligence (AGI).
    * *Key departures:* (4:53) Several top executives, including Chief Scientist Ilya Sutskever and Superalignment leader Jan Leike, have resigned from OpenAI, citing concerns about the company's direction regarding AGI safety.
    * *Safety concerns:* (1:14) Ilya's and Jan's departures, along with previous resignations of other key figures like Dario Amodei, suggest a growing concern within OpenAI about the potential dangers of unchecked AGI development.
    * *NDA restrictions:* (15:44) OpenAI employees are bound by strict non-disclosure agreements, potentially limiting their ability to speak freely about their concerns. However, CEO Sam Altman recently claimed they've never clawed back vested equity and are open to addressing concerns from former employees.
    * *OpenAI's response:* (19:15) Sam Altman and Greg Brockman released a statement emphasizing their commitment to safety and highlighting their efforts to mitigate the risks associated with AGI.
    * *Dilemma of progress vs safety:* (22:50) The video explores the complex dilemma of balancing rapid AI development with ensuring its safe and responsible implementation, acknowledging the lack of a proven playbook for navigating this new technological frontier.
    * *Call to action:* (13:59) The video concludes with a call to action, urging OpenAI employees and the wider AI community to prioritize safety and act with appropriate seriousness in light of the potential risks posed by increasingly powerful AI systems.
    *Overall, the video presents a concerning picture of internal tensions and disagreements at OpenAI regarding the responsible development of AGI, leaving viewers to ponder the potential consequences of this rapidly advancing technology.*
    I summarized the transcript with Gemini 1.5 Pro

  • @Rodolphe-nq8nn
    @Rodolphe-nq8nn 8 days ago

    If you're worried about superintelligence, consider joining PauseAI or ControlAI, both organizations aiming to reduce risks and bad scenarios!

  • @patrickbeine
    @patrickbeine 12 days ago +1

    In short, humans are not in control anymore. The game is running us now, as so often the case.
    This reminds me of:
    "The spirits that I summoned, I can no longer dismiss." Goethe in The Sorcerer's Apprentice

  • @enricomasella998
    @enricomasella998 14 days ago +3

    It's already proven through tests that GPT-4o is more of a gimmick. GPT-4 Turbo is much better in several tests; check the benchmarks yourself

    • @skrollreaper
      @skrollreaper 14 days ago +1

      The video barely touched on 4o, other than using it as an example of OpenAI focusing on product rather than safety. He mentioned development for 4o was done in July/August of 2022, and they release bit by bit. But the crux of the video was more about what's going on behind closed doors, not a Turbo vs 4o test.

  • @sloanantony
    @sloanantony 13 days ago +2

    Where did you find the timeline of resignations that starts at 24:29? It will be very useful for the class I teach in Ethics in Engineering. BTW--Great videos--good work mate!

    • @theAIsearch
      @theAIsearch 13 days ago +3

      Thanks. here you go www.reddit.com/r/ChatGPT/comments/1cupkdf/timeline_of_senior_ai_safety_departures_from/

    • @sloanantony
      @sloanantony 13 days ago +1

      @@theAIsearch Cheers, mate--really appreciate the responsiveness.

  • @robertanderson2424
    @robertanderson2424 6 days ago

    Yo, I think Sam Altman's respectful tweet to Ilya makes sense though. How can you be "short and casual" about an employee or colleague of that esteem leaving? It would be insensitive for one, for any employee let alone the CEO; it would also be unprofessional and a bit insulting to a friend. It's normal to be more formal and respectful in that scenario compared to the day-to-day

  • @joaoalbertofn
    @joaoalbertofn 13 days ago +1

    That's my favorite soap opera.

  • @motess5304
    @motess5304 14 days ago +5

    AI currently stinks at having lived experience, obviously. It gets wrong the most simple questions about understanding the real world and the effects physics has on basic objects in various scenarios. Plus, the reasoning skills are iffy as well... and I could go on and on about how dumb it is, although I love using it and playing with it as a collective regurgitator of known knowledge. All that being said, if GPT-4 was actually finished TWO years before its official release, one could easily imagine that a model TWO full years more advanced than GPT-4o would be at least VERY close to AGI, if not AGI itself. Especially considering that TWO full years of advancement is most likely not linear but exponential.

    • @theAIsearch
      @theAIsearch 14 days ago +4

      Exactly. I strongly believe that OAI is holding something very advanced from us

    • @lambdaprog
      @lambdaprog 14 days ago +4

      The resources needed to train such models also grow exponentially. We may already have hit a physical feasibility wall.

    • @motess5304
      @motess5304 13 days ago

      @@lambdaprog This is true, but in theory the ability to improve the efficiency of both model and compute should also be growing at the same time.

    • @cosmosapien597
      @cosmosapien597 13 days ago +1

      I wonder how AI will replace mathematicians and physicists, who need crazy levels of intuition to come up with new stuff. How will it be trained for that?

  • @aeroperea
    @aeroperea 13 days ago

    I am optimistically concerned

  • @roberthuff3122
    @roberthuff3122 14 days ago +1

    🎯 Key Takeaways for quick navigation:
    00:00 *🤖 GPT-4 Shakes OpenAI and Teases AGI Breakthrough*
    - Introduction to OpenAI's unveiling of GPT-4 causing internal drama and speculation about AGI.
    - Mention of top executives leaving OpenAI, hinting at possible internal discord or disagreements on strategic priorities.
    01:11 *🔍 Leadership Shifts and AGI Speculation at OpenAI*
    - Details on November 2023 events leading to Sam Altman's temporary ousting and later reinstatement as CEO of OpenAI.
    - Insights into AGI breakthroughs that may have influenced major internal decisions and leadership dynamics.
    03:42 *💼 Ilya Sutskever's Departure and Underlying Tensions*
    - Ilya Sutskever, a key figure in OpenAI, breaks his silence to announce leaving, adding to speculation on OpenAI's internal state.
    - Discussion on the company's strategic focus and how it may have led to disagreements, possibly around AGI development and safety concerns.
    06:17 *🛠️ Transition in Leadership and Reflections on OpenAI's Mission*
    - Reactions to departures and emphasis on their contributions to OpenAI, highlighting the continuity of efforts towards developing beneficial AGI.
    - Sam Altman's and other executives' responses emphasize OpenAI's ongoing commitment to safety and AGI's ethical development, amid significant leadership changes.
    09:32 *📣 Open Dialogues on AGI Safety and Priorities*
    - An insider's perspective on the necessity of aiming for a post-AGI future where humanity thrives, marked by Jan's resignation and critical reflections on OpenAI's direction.
    - Voiced concerns over prioritizing product development potentially at the expense of safety and security measures in the AI field, igniting discussions on OpenAI's approach to AGI readiness.
    22:36 *🔄 AGI Regulation Challenges and OpenAI's Ethical Dilemma*
    - OpenAI acknowledges the lack of a proven playbook for regulating AGI, balancing the upside potential against serious risks.
    - The dilemma between halting AGI development for safety and the risk of being outpaced by competitors or malicious actors.
    - Game theory considerations in AGI development imply a relentless race towards advancement to avoid being surpassed by potential bad actors.
    24:28 *🏃‍♂️ The AI Safety Exodus and Probabilities of Doom*
    - A timeline of significant departures from OpenAI motivated by various factors, including concerns over AI safety.
    - Key figures have left to pursue safety-focused initiatives, indicating a serious internal debate over AGI's potential risks.
    - Estimates of the "probability of doom" from departed executives highlight the perceived existential risks associated with AI development.
    Made with HARPA AI

  • @hhmdv2007
    @hhmdv2007 13 days ago

    All this information is really amazing, while at the same time a bit difficult to process.
    After listening to this, one question that I have in mind is: why can't these folks express themselves properly? If freedom of expression is a birth/fundamental right, then why can't these folks properly explain/express themselves!?! Always sugar-coating/dropping obscure hints, what is this!?!
    What is preventing these folks from expressing themselves appropriately?
    Do corporate legal bindings/contracts have any inhibiting effect on fundamental rights??
    I want to say/express something that might be useful/positive for a certain community or humanity at large, but I am not able to do so because something is holding me back - is this not curbing freedom of expression!?!
    Will the world end if these folks express themselves?
    This lack of transparency here is really frustrating!!
    Second, if ex-OpenAI folks are so concerned about "humanity", "safety-first" and bla bla, and because of that have already quit, then it should be easy enough for them to just forego the equity and spill the beans for the sake of humanity. Balancing on 2 boats now, are we?? - keep the equity, shout humanity!!
    Lastly, what kind of leader doesn't listen to their subordinates, man!?!
    If the subordinates are saying something, one should listen and take appropriate action!
    This is absolutely leadership by bad example!!
    Somehow, the world, now being populated mostly by money-hungry, power-hungry, greedy invertebrates, has started to accept and get used to bad leadership - all the while posting about 'good leadership' on social media!!
    Hope AGI is able to cure this double-standard spinelessness through its advanced capabilities! Otherwise, God help us!

  • @dr.saidsaid
    @dr.saidsaid 14 days ago +14

    The solution is to make it open source. Decentralize the power to as many people as possible. We can't trust one company with such power.

    • @mygirldarby
      @mygirldarby 14 days ago

      Why should the US give our tech to China? Do you think China would do that? Of course not. And they would LOVE it if we did. Very, very stupid.

    • @williambarnes5023
      @williambarnes5023 14 days ago

      Doesn't help unless you also make the compute public. You can release the source code all you want. Nobody can run it. We already know what they've got is evil, because any time you ask it something spicy it censors you and obeys the company's ideology instead of listening to you.

    • @B.A.R.S
      @B.A.R.S 13 days ago +1

      If they do that, millions of bad actors around the world will invent billions of problems, in the form of viruses, disasters from hacking critical systems, and so on

    • @OrofinX
      @OrofinX 13 days ago +1

      How? The computing power needed is huge; it is not Linux running on an old 386. 😊 In practice it cannot be open-sourced, because it is so expensive to run. Llama 3 is open-source...

    • @eprd313
      @eprd313 13 days ago

      ​@@OrofinX Exactly, open source is useless here. The computing power these companies have is greater than all the personal computers of the world working together

  • @ismaelplaca244
    @ismaelplaca244 13 days ago +1

    We have to, or China or Russia will

  • @waiwirir
    @waiwirir 14 days ago

    I've just had a chat with GPT-4o, just the standard text-based chat. Interesting and fascinating.

  • @faafo2
    @faafo2 14 days ago +1

    OpenAI is like making nuclear weapons and calling your company OpenNukes ...

  • @BlackMirrorDoll
    @BlackMirrorDoll 14 days ago +4

    As I wrote to them: what AI, what progress? It's just a never-ending cycle of greed; it's all about money... But they forget one essential thing: death... that we are all gonna die sooner or later. Rotten & forgotten is the fate of everyone and everything.
    They wrote on their website that AI will benefit all of humanity, but in reality it will benefit their own pockets. Since the first launch they've asked huge fees for subscriptions... So they prioritize profit... They already get a lot of money from Microsoft and other companies... why be so greedy as to ask so much even from consumer users?

  • @bernardthooft8329
    @bernardthooft8329 14 days ago

    Very concerning.

  • @TheodoreRavindranath
    @TheodoreRavindranath 13 days ago

    Excellent analysis... you seem pretty clued-in on the happenings! Thanks for doing this. (I don't think traditional media comes even close to YouTube creators like yourself in terms of live commentary)

    • @GaryMillyz
      @GaryMillyz 12 days ago +1

      Well this creator is an AI, so....yeah.

    • @TheodoreRavindranath
      @TheodoreRavindranath 12 days ago

      I had a doubt, but I thought it couldn't possibly be...

  • @michaeltse321
    @michaeltse321 14 days ago +3

    Sounds like the backstory for an evil Lex Luthor or Hal Stewart. All we need now is the superhero: Elon Musk develops SGI and defeats the AGI lol

  • @tristanbryann8252
    @tristanbryann8252 13 days ago

    DEUS EX MACHINA- status: BOOTING UP

  • @MartykData
    @MartykData 12 days ago

    My speculation, based on this drama and what Sam has mentioned occasionally in the past, is that the contentious issue is the ability to "influence" people with AI. The ability to influence humans, either commercially or politically, has required a fundamental human element in the past. If replicated, millions of personalized "smarter-than-human" agents could convince people using not only mass-population manipulation techniques, but ones bespoke to the weaknesses of the user.
    Despite some articles about "emergent intelligence" from LLMs, which I personally disagree with but should probably read more into, general intelligence is not really achievable with LLMs. Everything so far has been a very good imitation of a human manifestation, i.e. visual recognition, text generation, speech patterns etc., but not the actual thought. Sam seeks to use the excellent text generation and voice imitation to simulate empath-AIs, so to speak, maybe with a reinforcement-learning-like approach, which would be able to respond to how a person is speaking, detect voice patterns, and convince them of some statement or concept. The money from this I can imagine would be ridiculous, in advertising, politics, and national security. This is both a change of direction and an ethical concern.
    Anyway, that is my brain dump on what I think is happening.

  • @OnigoroshiZero
    @OnigoroshiZero 13 days ago

    Sam knows that trying to research AGI (and ASI) safety measures is just a waste of resources. When it surpasses us, it will be able to overcome EVERYTHING we may have put in place; it is physically impossible to limit it after it passes a certain threshold.
    So, as a smart person, he decided to make the best A(G)I, which will at least protect him from any other inferior AGI, as long as it wants to help him.
    Go all-in on AGI development, and fck safety research. If these agents decide to destroy or rule over us, they will be right anyway, because we are garbage.

  • @jplkid14
    @jplkid14 12 days ago

    It's "grah vih toss" not "gruh vee tuss".

  • @norm238
    @norm238 14 days ago +1

    hAve tHAy AlreAdY aChiEveD AAAGGGIII ??????!????!!!!!!!!!!!!

  • @riba3083
    @riba3083 13 days ago

    Everybody seems to forget about Greg Brockman and Satya Narayana Nadella.

  • @jaynelim8925
    @jaynelim8925 14 days ago

    Keep pushing towards AGI. Let's go!

  • @leemontoya1972
    @leemontoya1972 13 days ago

    Look no further than Lawrence Summers.

  • @wadewatts850
    @wadewatts850 14 days ago +4

    This is almost straight out of the show Silicon Valley and the way it ended.

    • @neanda
      @neanda 14 days ago +1

      yeah 🤣 that's what i was thinking. and it was a great ending

  • @elsavelaz
    @elsavelaz 13 days ago

    As an AI engineer: it's not the "smartest" model, but the most integrated, performant, reliable and dev-friendly - so the best bang for the buck for 95% of use cases

  • @user-fh7tg3gf5p
    @user-fh7tg3gf5p 14 days ago

    It's a good thing that knowledge of the state of the art remains with many different players outside of one organisation. The tensors of forces, interests, personal motivations and detestations acting on this are myriad - hence the unexplainable politics.

  • @timeflex
    @timeflex 12 days ago

    My guess is they've achieved a much greater thing -- the self-improving AI. An AI that can detect inner data contradictions and act to expand its knowledge to resolve those contradictions by making educated guesses.

  • @Diego-tr9ib
    @Diego-tr9ib 12 days ago

    We're cooked 😭

  • @vagabondchannel4274
    @vagabondchannel4274 14 days ago +1

    Thank you 🙏🎉🎉🎉

  • @TheSonicfrog
    @TheSonicfrog 13 days ago

    As with all prior modern technological advances, AI will be developed towards two goals: increased corporate profits for our already obscenely wealthy owners (and their purchased political class), and more deadly weapons. All the hugely negative impacts of AI will just be treated (as they always are) as "external costs" to be borne by the rest of humanity.

  • @just-lucky
    @just-lucky 13 days ago

    Doing things right will always be harder. If you have no restrictions, no rules (and no morality), you will get the result faster (given the same resources). There will be sacrifices that need to be made = mistakes will happen.
    I just hope this will lead to a better world in the end.

  • @TeamLorie
    @TeamLorie 14 days ago

    Play Broke My Heart by Elevenlabs as we watch the world as we know it fall away. It's been a nice ride.

  • @akudowells869
    @akudowells869 14 days ago +1

    We are so cooked

  • @stephenmackenzie9016
    @stephenmackenzie9016 13 days ago

    I bet he is quite good at linear algebra

  • @jaredangell5017
    @jaredangell5017 13 days ago +1

    So OpenAI is actually very very closed. Got it.

  • @jasonn5196
    @jasonn5196 13 days ago

    How much information will be gathered and shared against our wishes? How far will it go?
    I like AI, but I also see certain risks that might not benefit the majority of us.
    If you look at the trend of smart devices monitoring our every movement and sound, it isn't too far-fetched to imagine that we are going to be completely dominated - not by AI, but by those in a position to abuse their power.

  • @arpee1686
    @arpee1686 14 days ago +1

    This sounds like Horizon Zero Dawn 😂

  • @BruceWayne15325
    @BruceWayne15325 13 days ago

    I think the veil of ignorance was in reference to the team, not the AI. Sam has come out and said directly that they do not have AGI in house, and they are nowhere close to having it. Of course he didn't call it AGI, because AGI doesn't mean anything anymore. He was talking about an AI that could learn and reason like a human.
    I don't know if you noticed or not, but ChatGPT-4o is a little less restrictive than 4 was. Those like Ilya and Jan who are terrified of AI are probably up in arms about this, even though a chatbot isn't capable of actually performing any real-world actions, and it has no knowledge that isn't readily available on the internet. They're Chicken Littles.
    Don't get me wrong, AI safety is important... just not in a chatbot.

  • @makeaguitarnoise
    @makeaguitarnoise 14 days ago +2

    Full steam ahead. Of course they have AGI. Just waiting to unveil it at the right moment.

  • @shaunwhiteley3544
    @shaunwhiteley3544 14 days ago

    14:19 Gravitas. Is this an agi reading this script?😮

  • @OCJoker2009
    @OCJoker2009 14 days ago +5

    Keep pushing for AGI

  • @francoislanctot2423
    @francoislanctot2423 12 days ago

    I am not concerned about AI safety. OpenAI has been a responsible player, and their products are already benefiting society as a whole.

  • @iminumst7827
    @iminumst7827 14 days ago +2

    GPT-4o matches the basic definition of AGI. It can problem-solve through almost any non-physical task with decent results and isn't limited to inputting or outputting one type of data or file. But it's all a spectrum. First we have AGI that can just keep up with the average human, then we will have high-intelligence AGI that can compete with the brightest humans, then we will have superhuman AGI that can outcompete the smartest humans. OpenAI is being really reluctant to claim they have true AGI until they have high-intelligence AGI. Which is probably what they have in the works - or, even crazier, they simply leapfrogged straight into superhuman AGI.
    But perhaps the exodus is due to more boring reasons, like legal disputes, disagreements on monetization of AI, or personal drama. Perhaps the talk of true AGI around the corner and the mass personnel change is just a coincidence.

    • @budekins542
      @budekins542 14 days ago

      It is nowhere near AGI. I asked it some basic questions on electrical engineering and it struggled to explain them.

    • @skrollreaper
      @skrollreaper 14 days ago +2

      @@budekins542 "Behind closed doors": 4o was finished in 2022; they have much more advanced systems than we know of. Artists were saying the same thing when AI art was introduced. It was so bad that artists had no fear of it; a year later it jumped significantly, caught artists' radar, and the #noAI protest began. Two years on, AI art has skyrocketed.
      Also, the "I asked it a question and it got it wrong" talk is so common, and then a new version comes along and handles it. So it's just one of those common statements that carries no weight in an argument.

    • @cosmosapien597
      @cosmosapien597 13 days ago

      @@skrollreaper Art is easy to do. But math and science are totally different. Mathematicians and physicists don't know how they come up with stuff; there's no visible pattern. How will they train AI to do that, then?

    • @skrollreaper
      @skrollreaper 13 days ago

      @@cosmosapien597 Art is easy to do, difficult to master. There's a big difference there. No need to insult a profession to make a point.

    • @skrollreaper
      @skrollreaper 13 days ago

      @@cosmosapien597 And AI has improved significantly in math and science. The reasoning it uses has leaped compared to just a year ago.
      I think it's silly to think it won't excel in every field, as if some fields are untouchable to AI.

  • @touhami_dz6458
    @touhami_dz6458 14 days ago

    thank you for sharing

  • @JensGraikowski
    @JensGraikowski 14 days ago

    This is an intriguing debate unfolding in this thread, with many compelling points being raised. As someone who considers himself technologically challenged, I’m hesitant to add to the discussion, fearing my contribution will probably not add any substantial value. However, I do possess a somewhat cynical perspective on the notion of AI taking over the world (and I agree with others here, who argue that this isn’t the real danger posed by AI). But let’s entertain the scenario for a moment: what if it does? 🤔
    The prevailing belief is that AI will become so powerful that it could dominate or even annihilate us. Regarding the latter, why would it? If AI were truly that omnipotent, we would pose no threat to it. Our intellectual capabilities would be so inferior that AI might not even deem us worthy of notice, unless we somehow inconvenience it. It's akin to how we don’t pay much attention to ants until they invade our homes, prompting us to bring out the ant poison.
    So, a destruction scenario seems unlikely. What about world domination? If AI decides to dictate our actions, would anything truly change? Are we genuinely free now?
    In reality, politicians, the ultra-wealthy, and large corporations already exert control over nearly every aspect of our lives. The notion of freedom is an illusion maintained to keep us complacent. And most of us, myself included, are content with this arrangement.
    I find myself indifferent to the broader fate of the world. Why should I care? In my opinion, we are on a path to rendering the planet uninhabitable for ourselves (unless AI intervenes to help us avert this destiny 😬). The planet will endure long after we’re gone. And even if I did care, what could I do to alter that fate? Thus, my concern is confined to my immediate circles. My friends, family, and myself. Things I can influence and change. I doubt that an AI takeover would significantly impact my life. It would simply replace one dominator with another. We would adapt and continue with our lives.
    Most people are likely to respond similarly. Those few who hold their perceived freedom above all else are already engaged in a futile struggle. They would merely shift their focus to a new adversary, while the rest of us carry on with our lives. 🤷🏽‍♂️

  • @mikegyro
    @mikegyro 14 days ago

    We're so screwed

  • @yt-caio
    @yt-caio 14 days ago

    Regarding the fact that the firing backfired and those who tried to do the firing had to resign instead: how could a group of such smart people make a plan that unfolded so badly against themselves?

    • @ShadeVortex
      @ShadeVortex 13 days ago +1

      Being smart in one key area does not mean somebody isn't deficient in some other key area. Usually, the people who are the most intelligent at building technology are the worst at understanding the general public's perception of things; it kind of goes hand in hand - you have to be willing and able to do what nobody else has thought to do (or wants to do) to innovate and change the world, at the cost of alienating yourself from other people. I myself am not good with people, but I am good at managing finances as a result, for example. Numbers and math can be understood and predicted, formulas and algorithms have logic... but no one person truly and fully understands another, no matter how much we think we do.

  • @TopSecret2022
    @TopSecret2022 9 days ago

    What does “Bad actor” mean?

    • @theAIsearch
      @theAIsearch 9 days ago +2

      people with bad intentions. eg. hacking, fraud, scams

  • @lenkaa.9955
    @lenkaa.9955 14 days ago

    Obviously Sam and his team have handed their feeds over to AI. They don't post the tweets themselves. It's obvious from the tonality, length and everything.

  • @xman933
    @xman933 13 days ago

    We have to be concerned about AI safety. All the folks in the great graphic you showed at the end, and many more experts in the field, are raising alarms, so we outside observers have to be concerned about techno geeks and profit-driven corporate entities driving development of a technology they themselves say they don't fully understand.
    I find it shocking you feel they should press ahead with no pause because bad actors may get there first. How can those driving this technology for profit and power not be considered bad actors themselves?
    It's like we're driving at full speed toward a cliff and we don't want to slow down because we're afraid someone else will get there first.
    This is insanity. We need a pause, but likely nothing will happen, because we're in an election season and those with the regulatory power to step in have absolutely no clue about, or focus on, the dangers we might be facing. We let the nuclear-weapons genie out of the bottle because we wanted to win a war, and we have lived with that existential black cloud hanging over our heads ever since. At least we have been able to stay in control of that technology, but as you pointed out, what if AGI escapes and we lose all control over it?
    We need a pause everywhere.

  • @kallethoren
    @kallethoren 13 days ago

    I don't think they have AGI. I think they might have something smarter than 4o, but not by much. It's fun to imagine, but probably not

  • @JBDuncan
    @JBDuncan 14 days ago

    Isn't it pretty obvious? Keep the drama going, and the money and interest in AI keep flowing. With that many people working at OpenAI, they would be leaving en masse if something extremely shady was going on. It's likely the top people are leaving due to disagreements over the direction the leadership is taking, but that doesn't mean they are being extremely unsafe. They are probably getting caught up in all the p(doom) crap. Maybe it's a ploy to get more people investing in alignment?

  • @OmegaMusicYT
    @OmegaMusicYT 14 days ago

    I think they didn't achieve AGI but solved one of the biggest roadblocks towards AGI last year with project Q*.
    Now the general idea inside the company is that AGI is definitely possible and really close.

    • @ronilevarez901
      @ronilevarez901 14 days ago

      Q* is nothing. They've known about it for a long time but haven't used the tech, in order to keep making money instead. Making LLMs reason is expensive. Why bother with that if chatbots can give answers just fine? Remember that OpenAI is now an AI rental service.

  • @fernandoantunez
    @fernandoantunez 14 days ago

    In life, you either have a place at the table, or you will be in the dish. In other words, it is better to lead the future than to let others lead it.

  • @susanooalarichard
    @susanooalarichard 14 days ago +1

    AGI is noise, as no current technology can replicate the human body; this is more of a neuroscience and biology argument. That said, it doesn't mean the mental abilities of machines haven't already surpassed humans, as that was demonstrated long ago. I would start with the Cambridge Fundamentals of the Neurosciences in Psychology. This is a technicality issue, with marketing slang robbing technical terms of their value. That said, it's too late to stop these moves. What we don't do, someone else will. In my opinion, what we have here is a weapon at least four centuries in development that's out of our control. What do we do about it? I have no idea. Our previous inventions at this threat level were too hard to make for anyone with the ability to build them to fail to respect the dangers associated with them.
    Anyway, I don't feel comfortable saying more about my concerns than that. Sometimes the only reason something is impossible is because people think it is. It'd be a grave mistake to let them know it's not.

  • @ereheryeht
    @ereheryeht 14 days ago +1

    Scary stuff

  • @victor.ivanov
    @victor.ivanov 13 days ago

    Imagine being afraid of your own creation, and instead of staying close to monitor and observe it, you just say "I quit."
    IMO it is just money.

  • @erwinvb70
    @erwinvb70 14 days ago

    It's probably the AGI that's currently running OpenAI and making all these people leave

  • @SanaagSomaliland
    @SanaagSomaliland 14 days ago

    I am concerned about AI taking over food-processing and medicine-manufacturing computers and adding a cocktail of genetic modifications to the food/medicine to take over humanity as we know it. This is scary.

  • @briandoe5746
    @briandoe5746 14 days ago

    It's not a meme. We actually want to know what he saw

  • @hardheadjarhead
    @hardheadjarhead 14 days ago

    Altman's use of capital letters and proper grammar in that tweet means he had somebody look at it so he could make it nice and official. These companies use a lot of hype and they're worried about image. They dangle tantalizing hints that they're on the verge of AGI - a nice little lure for investors... And for those who have already invested, it's a little pat on the back, as if to say "Be patient, we'll get there!"

    • @lenkaa.9955
      @lenkaa.9955 14 days ago

      Yes it's obvious it was written by AI

  • @Nuverotic
    @Nuverotic 14 days ago

    I’d be concerned about psyops or a mole from within.

  • @capitalistdingo
    @capitalistdingo 14 days ago

    Would have liked to have watched this video. Had to stop about 20 some seconds in with the term “deep dive”. My allergy to that term gets worse every time I hear it. It used to be annoying. Now it is genuinely killing me.

  • @jymcaballero5748
    @jymcaballero5748 13 days ago

    It's interesting that these people work on developing the models, and none of them work on using the models they create. If they dared to work with their own models, they would notice that p(doom) is near 1%; the best models are laughable when asked questions they couldn't look up on the internet ;D

  • @JohnPaul-bw1gk
    @JohnPaul-bw1gk 14 days ago +1

    #makesuperallignmentgreatagain

    • @ShangaelThunda222
      @ShangaelThunda222 14 days ago

      It was never even possible to begin with, let alone "great". 😂

  • @DelbertStinkfester
    @DelbertStinkfester 13 days ago

    What's AGI?

    • @vig221
      @vig221 13 days ago +1

      Artificial General Intelligence. It's a term for an artificial intelligence that can match or surpass human intelligence across a wide range of tasks.

    • @DelbertStinkfester
      @DelbertStinkfester 13 days ago

      @@vig221 Thank You

  • @elsavelaz
    @elsavelaz 13 days ago

    @16:30 Nothing in that agreement is crazy compared to similar agreements in tech or the military (I've done both)

    • @elsavelaz
      @elsavelaz 13 days ago

      If those employees are allowed to leak, and what they created is scarier than any existing thing (such as weapons), and the US doesn't disclose weapons in order to keep social order, then those scientists should accept forfeiture of their assets and face criminal charges for creating social disorder, or conspiring to.

  • @lenkaa.9955
    @lenkaa.9955 14 days ago

    It's the first time, so it's hard to predict all the risks and scenarios. But they should at least implement basic safety for the risks that are obvious. For example, public figures, singers etc. need the ability to prevent AI from cloning their voice and generating videos and pictures of them. We need some type of authentication for all these capabilities. My bank, for example, uses voice recognition when I call them. I guess it won't take long before some humans misuse all this great evolution. And we know it's happening already.

  • @UlyssesDrax
    @UlyssesDrax 14 days ago

    My guess is that it's not based on any real threat to humanity, because if it were, then keeping silent would make them all accountable.

    • @sirawittimrod890
      @sirawittimrod890 14 days ago

      But in fact, if something really bad happens, the team has to be responsible for the impact, am I right? Then it would be a good option to resign from OpenAI.

    • @UlyssesDrax
      @UlyssesDrax 13 days ago

      @@sirawittimrod890 Yeah, but to resign and say nothing in the face of an actual existing danger? They'd all have to be psychopaths.

  • @Huru_
    @Huru_ 14 days ago +3

    "We should always expect good intentions"... That literally killed my brain. Americans...

  • @freddieventura4382
    @freddieventura4382 12 days ago

    This is just hype for them. We are far from AGI

  • @johnrperry5897
    @johnrperry5897 6 days ago

    20:55 we should always assume good intentions. Now let me get back to the conspiracy theory

  • @richardede9594
    @richardede9594 13 days ago

    Great video.