Why Anthropic's Founder Left Sam Altman’s OpenAI

  • Published: 14 May 2024
  • Amazon recently announced that it will invest up to $4 billion in Anthropic, one of the buzzy startups building a generative AI chatbot. With the deal, Anthropic secures a major investment from a second Big Tech company, after already receiving $300 million from Google at the end of last year. Google’s investment, which wasn’t reported at the time, gave Google an estimated 10% stake in the company.
    In its latest fundraising round in May, Anthropic was valued at nearly $5 billion, according to TechCrunch. An updated valuation was not disclosed for Amazon’s most recent investment nor details about the size of its minority stake.
    Amazon becomes the latest major player to invest in up-and-comers in the generative AI world, after ChatGPT maker OpenAI received $13 billion from Microsoft.
    Amazon will invest $1.25 billion up front with the possibility for another $2.75 billion later, for a potential total of $4 billion. Amazon declined to provide further details about the deal’s structure.
    In our interview with cofounder and CEO Dario Amodei, he lays out his three-tiered fear model in response to a question from Jeremy Kahn about the existential risks posed by A.I., risks that even Sam Altman is worried about.
    00:00 - Leaving OpenAI To Form Anthropic
    00:57 - Creating Claude
    03:26 - Setting Constitutional AI
    06:18 - Data Privacy And Storage Concerns
    07:10 - Government Regulations
    08:26 - AI In Robots
    09:48 - Existential Risk
    11:27 - Open-Source Models
    12:29 - Climate Impact
    13:24 - AI Risks
    Subscribe to Fortune -
    ruclips.net/user/subscription_c...
    Fortune Magazine is a global leader in business journalism with 55 million monthly page views and a readership of nearly 32 million, with major franchises including the Fortune 500 and the Fortune 100 Best Companies to Work For. The new Fortune video channel dives into personal stories from business owners and entrepreneurs becoming successful in business and sharing their tips to help you reach your goals.
    Website: fortune.com/
    Facebook: / fortunemagazine
    Twitter: / fortunemagazine
    TikTok: / fortune
    #ai #amazon #technology

Comments • 105

  • @posthocprior
    @posthocprior 7 months ago +45

    This interview didn't address why Amazon invested billions. Here's my guess: based on the strong emphasis on safety and on training and developing the model, this chatbot is most likely going to replace some customer service representatives. Given the example of reading and interpreting a balance sheet, it could be used to clarify billing questions from customers. That is, the chatbot could see a bill from an Amazon customer, hear what the problem is, and try to either explain the billing problem or resolve it. My guess: a significant percentage of Amazon's customer service deals only with billing problems. Also -- just a guess -- Amazon tried either to build their own chatbot or license it from OpenAI, and the combination of the time needed to develop it and the cost was greater than $4 billion.

    • @avidlearner8117
      @avidlearner8117 7 months ago +1

      Yep!

    • @Goohuman
      @Goohuman 6 months ago +2

      I'm sure that is one of many revenue streams Amazon can capitalize upon. I'd add that Amazon is also a source of massive data, needed to create the AIs in the first place. Of course they want it to benefit their company.

    • @SentimentalMo
      @SentimentalMo 1 month ago +1

      “Bedrock,” he said: do training on your own data in AWS? How much electricity will Amazon use if it hosts training on most private data sources? Most of the data in the world is in private hands. 🤔😊

    • @nonefvnfvnjnjnjevjenjvonej3384
      @nonefvnfvnjnjnjevjenjvonej3384 1 month ago

      It's not that hard. All the big companies need to be in the next big thing if they want to remain big. Microsoft has its tentacles in OpenAI, so Amazon went for the next best thing.

    • @alainportant6412
      @alainportant6412 1 month ago

      LICENSE IT? Amazon could buy the entire industry, how dare you

  • @Kai-ne3ks
    @Kai-ne3ks 2 months ago +4

    He’s so much more in touch with his emotions than Altman or Ilya, which is essential for an aligned AGI. This also comes through in Claude 3 Opus, which can create incredibly psychologically complex fiction.

  • @ThierryQuerette
    @ThierryQuerette 7 months ago +49

    🎯 Key Takeaways for quick navigation:
    00:28 🧠 The founders of Anthropic left OpenAI with a strong belief in two things: the potential of scaling up AI models with more compute and the necessity of alignment or safety.
    01:28 🛡️ Anthropic's chatbot Claude is designed with safety and controllability in mind, using a concept called "Constitutional AI" for more transparent and controlled behavior.
    03:38 🤖 Constitutional AI is different from meta prompting; it trains the model to follow an explicit set of principles, allowing for self-critique and alignment with those principles.
    07:42 ⚖️ When discussing AI regulation with policymakers, the advice is to anticipate where the technology will be in 2 years, not just where it is now, and to focus on measuring the harms of these models.
    12:37 🌍 Concerns about the climate impact of large-scale AI models are acknowledged, but the overall energy equation (whether these models ultimately save or consume more energy) is still uncertain.
    Made with HARPA AI

    • @1anre
      @1anre 7 months ago +1

      This was brilliant.
      What other capabilities does HARPA have?

  • @hotdiary
    @hotdiary 5 months ago +1

    I really like the interview. Great questions.

  • @mrcookies409
    @mrcookies409 5 months ago +25

    This guy is cooler than Altman.

    • @jaedme
      @jaedme 1 month ago

      coolness competition?

    • @mrcookies409
      @mrcookies409 1 month ago +2

      @@jaedme Yeah he explains things better.

    • @devstuff2576
      @devstuff2576 1 month ago

      you just don't like Altman (for no logical reason) and that's fine

    • @mrcookies409
      @mrcookies409 1 month ago

      @@devstuff2576 nope, Altman is fine, this guy is just simply cooler

  • @abagatelle
    @abagatelle 7 months ago +11

    Claude's large context window is excellent.

  • @billhanna8838
    @billhanna8838 7 months ago +4

    Met Kamala, I bet that was a "mind-blowing, full-on conversation"?

  • @fortune
    @fortune 7 months ago +1

    To read more about Amazon's investment in Anthropic, click on our story here: fortune.com/2023/09/25/anthropic-ai-startup-4-billion-funding-amazon-investment-big-tech/

  • @Jefemcownage
    @Jefemcownage 7 months ago +6

    imho the constitutional model is very annoying to chat with, as it claims to be all-knowing yet bound by whatever constitution it's confined by, which is inherently impossible.

  • @bobdagostino5472
    @bobdagostino5472 5 months ago +21

    What a shock, he wants regulations on open-source models that can compete with his company's proprietary offerings.

    • @bigglyguy8429
      @bigglyguy8429 2 months ago

      Yeah, it's pathetic and dangerous, not to mention no fun. Open source is the only safe way forward. Surely by now we must have learned we cannot trust any government or corporation. ANY.

    • @davidkey4272
      @davidkey4272 15 days ago

      It's disgusting. Fortunately, they will go to zero. There is no value in base models at this point. They are all converging.

  • @MrSchweppes
    @MrSchweppes 7 months ago +1

    Love listening to Dario Amodei

  • @sapienspace8814
    @sapienspace8814 8 days ago

    @ 5:30 I find it very interesting that when he talks about "RLHF vs. Constitutional AI" he does not mention how Reinforcement Learning (RL) is also used inside "Constitutional AI". In other words, the Human Feedback (HF) is replaced by automation from principle statements and RL.
    In the lawsuit between Elon Musk and OpenAI, it was revealed in an email (2018) that "the core technology" they are using is from the "90s".
    RL was originally funded by the USAF prior to 1997.
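
    For readers following the RLHF vs. Constitutional AI point above, here is a minimal sketch of the critique-and-revise loop that Constitutional AI is built around, with AI-generated feedback against written principles standing in for human labels. The `generate` callable and the example principles below are placeholders for illustration, not Anthropic's actual API or constitution.

```python
# A minimal sketch of the critique-and-revise idea behind Constitutional AI,
# where feedback is generated against written principles instead of coming
# from human raters (the "HF" in RLHF). The `generate` callable and the
# example principles are placeholders, not Anthropic's actual API or constitution.

from typing import Callable, List

PRINCIPLES: List[str] = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that are deceptive or that encourage harmful behavior.",
]

def constitutional_revision(prompt: str, generate: Callable[[str], str]) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(prompt)
    for principle in PRINCIPLES:
        critique = generate(
            "Critique the response below according to this principle.\n"
            f"Principle: {principle}\nResponse: {draft}\nCritique:"
        )
        draft = generate(
            "Rewrite the response so it better satisfies the principle, "
            "using the critique.\n"
            f"Principle: {principle}\nCritique: {critique}\n"
            f"Original response: {draft}\nRevised response:"
        )
    # In the published method, a second phase has the model compare pairs of
    # responses against the principles, producing AI-generated preference data
    # for an RL step; that is the sense in which RL is still "inside"
    # Constitutional AI.
    return draft
```

    Any chat model wrapped as a `generate(prompt) -> str` function could be dropped in to experiment with the loop.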

  • @7mikeraj
    @7mikeraj 7 months ago

    Good discussion!

  • @onlyagreeingsometimes
    @onlyagreeingsometimes 7 months ago +2

    💳🤔 The risk is those who stop you from opting out if you don't want to use it... it should be an on-and-off switch... it's the same thing with cash vs plastic or phone swipe 💳 🤔

  • @davidkey4272
    @davidkey4272 15 days ago

    Whenever you hear "safety" you should think "censored." And in that sense it is odd that he left because both companies are clearly prioritizing "safety."

  • @johnny6756
    @johnny6756 7 months ago +9

    Wow, this guy is impressive. Seems to be highly intelligent, but also highly mature with a very "close to reality" view of things, it seems to me. Makes me less scared about the AI future.

  • @joyjitpal
    @joyjitpal 1 month ago

    When will Claude have access to the internet?

  • @quakers200
    @quakers200 7 months ago +2

    Do we even know to what extent these companies can be held liable for the answers they provide? Oops, we just figured out how to eliminate a third of our workforce.

    • @mughat
      @mughat 6 months ago

      Search up "Luddite". You might be one.

  • @simokokko7550
    @simokokko7550 7 months ago +3

    He says there might be a risk, a 10-20% chance that things go wrong. I wonder what he means by something going wrong. "Mildly" wrong or a catastrophe? If it is a catastrophe, 10-20% is a terribly high chance.

    • @flipp081
      @flipp081 7 months ago +1

      probably catastrophe

    • @drjux2114
      @drjux2114 7 months ago +1

      Lmao I was looking for this comment. I'm pro AI, and I couldn't get past that 10-20% chance, coming from him lmaoo

    • @davidkey4272
      @davidkey4272 15 days ago

      No one ever provides a model for what that means. It's probably a fear that it might use the "N" word.

  • @kawalier1
    @kawalier1 7 months ago +3

    100,000 tokens more to fine-tune this model with a single prompt 😂 The CEO of Stable Diffusion has the proper approach: setting up a private domain in the area of private model customizations ,,😎

  • @gmenezesdea
    @gmenezesdea 23 days ago

    I don't trust anyone working in AI to have our best interests in mind.

  • @kevinr8431
    @kevinr8431 7 months ago +1

    Wouldn't it be so much more helpful if you would show examples and let the product demo itself?

  • @CallSaul489
    @CallSaul489 18 days ago

    I don’t like the idea of a small group of people deciding what the “model’s values are”.

  • @yoursubconscious
    @yoursubconscious 7 months ago +3

    if they were confident, why didn't they give a live demo?

  • @Longtermalwayswins
    @Longtermalwayswins 6 months ago +14

    Why he left: for money. Done

    • @ehza
      @ehza 5 months ago

      Precisely lol

    • @william8632
      @william8632 1 month ago

      😂

  • @user-lb2gu7ih5e
    @user-lb2gu7ih5e 29 days ago

    By YouSum
    00:00:23 Pouring more compute improves models indefinitely.
    00:00:35 Safety and alignment are crucial in scaling models.
    00:01:06 Claude chatbot prioritizes safety and controllability.
    00:02:00 Constitutional AI ensures transparent and controllable model behavior.
    00:02:12 Claude's large context window allows processing extensive text.
    00:03:33 Training AI with principles differs from meta prompting approaches.
    00:04:18 Constitutional AI self-critiques to align with set principles.
    00:10:11 Concerns about AI risks evolve from bias to existential threats.
    00:11:48 Balancing open-source AI benefits with safety concerns is crucial.
    00:12:37 Considerations about the environmental impact and energy usage of models.
    00:13:35 Optimism tempered with caution about the future of AI technology.
    By YouSum

  • @JustinHalford
    @JustinHalford 7 months ago +3

    Watching Dario speak so openly about the risk of AI, especially of open source models, is sobering. He is clearly concerned about the future impacts of the technology.

    • @bigglyguy8429
      @bigglyguy8429 2 months ago +1

      And it's why he won't be getting my money or that of any company I advise. Sick of censorship monkeys on my back. They're holding back progress to enrich themselves, period.

  • @edwardmartin243
    @edwardmartin243 5 months ago +1

    We're all dead.

  • @instantkevlar4763
    @instantkevlar4763 5 months ago

    Seth Rogen is so smart. He even knows AI.

  • @bigglyguy8429
    @bigglyguy8429 2 months ago

    Welp, 11:50 made my mind up for me. Was considering switching my sub from GPT to Claude, but having heard his approval of censorship there's no way this Claude thing is getting my money. I'd pay double for GPT4 if uncensored. First company with the balls to do that wins.

  • @USONOFAV
    @USONOFAV 1 month ago

    I'm glad he did. Claude 3 is much better than GPT4

  • @brasidas33
    @brasidas33 7 months ago +3

    Short Amazon

  • @adamy4435
    @adamy4435 1 month ago

    Wtf, his name isn't even in the title 😢

  • @zalzalahbuttsaab
    @zalzalahbuttsaab 4 months ago +1

    Yeah I just tried Claude. Nice, clean interface. Makes up things as it goes along. I won't be using it beyond the one session that I had with it. I'm sticking with chatGPT.

  • @AjaySharma-me1sy
    @AjaySharma-me1sy 5 months ago

    Dario looks like Dan Melcher from Silicon Valley (the guy whose wives (yes, wives) Erlich Bachman sleeps with)

  • @dawncc1
    @dawncc1 5 months ago

    Why does everyone have a belief that only the US is creating AI? How does safety align with that?

  • @BigDataLogin
    @BigDataLogin 6 months ago

    Cool

  • @theencryptedpartition4633
    @theencryptedpartition4633 2 months ago

    Bruh, you're not making it easy for either of them. The more AI companies there are up there, the more producers of stuff like GPUs are simply gonna hike up prices, and it will be good for neither.

  • @edwardj3070
    @edwardj3070 5 months ago +1

    recent gen GPT looks to be truly disruptive of the workplace. that's a good thing. hope it puts 75 million people out of their somewhat worthless repetitious data processing jobs ASAP. force real questions about the economy. in the US, anyway, we had "enough" for everyone decades ago

  • @blackspetnaz2
    @blackspetnaz2 1 month ago

    Short answer. MONEY! I saved you 14 min.

  • @user-os5wd1ms6o
    @user-os5wd1ms6o 7 months ago

    = GAN

  • @-adrian.
    @-adrian. 7 months ago

    Ready?

  • @The12thSeahorse
    @The12thSeahorse 7 months ago +7

    I would like to know, if AI is so wonderful and exciting, why can’t AI give out answers to the climate change problems?

    • @roberthuff3122
      @roberthuff3122 7 months ago +1

      Turning to AI to give ‘answers’ to large complex problems is the short, direct path to tyranny. Think for yourself.

    • @billhanna8838
      @billhanna8838 7 months ago

      @@roberthuff3122 thought Climate Change was planned for that?

    • @jessieadore
      @jessieadore 7 months ago

      Because AI, particularly the way they intend to scale it, contributes to climate change

    • @ChristianKleineidam
      @ChristianKleineidam 6 months ago +3

      "Climate change problems" is a series of a lot of different problems and AI is actually giving out answers that help with some of them. It for example helped Google reduce the energy they need to use to cool their data centers by 40% back in 2016.

    • @kavinho
      @kavinho 5 months ago +1

      It’s highly unlikely that the knowledge they’ve been training on can be extrapolated to give an answer.
      To be able to answer such fundamental unknown questions, the AI models of today would need to operate as agents in the physical world, able to make scientific discoveries and draw conclusions from them.

  • @NewCalculus
    @NewCalculus 7 months ago +3

    Anthropic's Claude is by far the best Chatbot. Nothing else even comes close. ChatGPT who?

  • @senju2024
    @senju2024 6 months ago

    The so-called 100,000-token context window is now old school. MemGPT uses virtual context via function calls, which allows unlimited memory. I would not brag about the already limited context window that he is boasting about. But I will give some credit, as this video is already a month old, and that is old tech given the pace of AI progress.

    • @A5tr0101
      @A5tr0101 1 month ago

      Sounds like a terribly resource-intensive and badly designed AI to me
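
    The "virtual context" idea mentioned in the comment above boils down to keeping a bounded in-context window and paging older messages out to external storage, with a retrieval call to bring relevant text back in. Below is a toy sketch of that scheme; the class and method names (ContextWindow, add, recall) are hypothetical and are not MemGPT's real API.

```python
# Toy illustration of a "virtual context" memory scheme: a bounded in-context
# window plus external storage that older messages get paged out to, with a
# simple recall step to page relevant text back in. All names here
# (ContextWindow, add, recall) are hypothetical, not MemGPT's actual API.

from collections import deque
from typing import Deque, List


class ContextWindow:
    def __init__(self, max_tokens: int = 1000) -> None:
        self.max_tokens = max_tokens
        self.window: Deque[str] = deque()  # messages currently "in context"
        self.archive: List[str] = []       # external storage for paged-out text

    def _token_count(self) -> int:
        # Crude token estimate: whitespace-separated words.
        return sum(len(m.split()) for m in self.window)

    def add(self, message: str) -> None:
        """Append a message, paging out the oldest ones if the window overflows."""
        self.window.append(message)
        while self._token_count() > self.max_tokens and len(self.window) > 1:
            self.archive.append(self.window.popleft())

    def recall(self, query: str, k: int = 3) -> List[str]:
        """Naive keyword search over the archive; a real system would use
        embeddings or a database, typically invoked by the model as a function call."""
        hits = [m for m in self.archive if query.lower() in m.lower()]
        return hits[:k]
```

    The trade-off this thread touches on is that a very large raw context window is simple but costly per call, while paged memory is cheaper per call at the cost of extra retrieval machinery.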

  • @PankajDoharey
    @PankajDoharey 5 months ago

    There was an opportunity to make money, so why wouldn't he go ahead and start a new company? Though Anthropic is currently far behind OpenAI, I think eventually everyone will catch up.

  • @ivanf2782
    @ivanf2782 2 months ago +1

    Why not put his name on the video? Why not put his name at the top of the video description? Come on! People who watch your videos are smarter and more interested in getting to know all the people in the field, not just (clickbaiting) using other people's names, like Sam Altman's, to get attention. Can you improve on that?

  • @jialx
    @jialx 1 month ago

    Is the guy who owns an AI company a pessimist or an optimist about the future of AI 🤪

  • @avidlearner8117
    @avidlearner8117 7 months ago

    Yeah but…. Claude is wrong a LOT…. And often. Makes up stuff. And I mean Claude 2 100k version.

  • @ShadyRonin
    @ShadyRonin 5 months ago +2

    I don’t trust this guy any more than I trust Sam. They’re all in it for a zero sum game of ultimate control. The idea of “keeping us safe” has been a ruse as old as human history

    • @alainportant6412
      @alainportant6412 1 month ago

      Zuck will open source all that crap and make these two creepy dudes irrelevant.

  • @dawncc1
    @dawncc1 5 months ago +1

    So it's a WOKE AI?…

  • @shanecarroll7523
    @shanecarroll7523 7 months ago +1

    First

  • @Goohuman
    @Goohuman 6 months ago

    My concern is regarding these guidelines or AI rules as created by humans. You are simulating wisdom. I believe a person's core values, if they come from within or from other humans, will eventually fail in some critical way. My bias comes from Christianity and I believe wisdom comes from God, so wise people behave well towards others and defend good in word and deed, as I perceive my faith requires. The real problems will manifest when the AI must decide to lie or do something that could be perceived as bad in order to support good things: such as lying to a person bent on doing bad and misdirecting them, or violently taking down a person who is very likely to harm or kill other good people, or even doing nothing while bad people are being harmed, allowing some less intelligent human to own the consequences of their actions.
    I'm sure the smart people at Anthropic have considered these matters, but I know from experience that there won't be a rule or law that governs such a being as the super-intelligent, massively capable thing that this AI can become. Human wisdom applies to the single human, with all the limitations of a human in place.
    I look forward to seeing what the coders at Anthropic do on that level.