Why Anthropic's Founder Left Sam Altman’s OpenAI

  • Published: 16 Nov 2024

Comments • 113

  • @posthocprior
    @posthocprior 1 year ago +55

    This interview didn't address why Amazon invested billions. Here's my guess: based on the strong emphasis on safety and on training and developing the model, this chatbot is most likely going to replace some customer service representatives. Given the example of reading and interpreting a balance sheet, it could be used to clarify billing questions from customers. That is, the chatbot could see a bill from an Amazon customer, hear what the problem is, and try to either explain the billing problem or resolve it. My guess: a significant percentage of Amazon's customer service deals only with billing problems. Also -- just a guess -- Amazon tried either to build their own chatbot or license it from OpenAI, and the combination of the time needed to develop it and the cost was greater than $4 billion.

    • @avidlearner8117
      @avidlearner8117 1 year ago +1

      Yep!

    • @Goohuman
      @Goohuman 1 year ago +2

      I'm sure that is one of many revenue streams Amazon can capitalize upon. I'd add that Amazon is also a source of massive data, needed to create the AIs in the first place. Of course they want it to benefit their company.

    • @SentimentalMo
      @SentimentalMo 7 months ago +1

      “Bedrock,” he said: do training on your own data in AWS? How much electricity will Amazon use if it hosts most private data-source training? Most of the data in the world is in private hands. 🤔😊

    • @nonefvnfvnjnjnjevjenjvonej3384
      @nonefvnfvnjnjnjevjenjvonej3384 7 months ago

      It's not that hard. All the big companies need to be in the next big thing if they want to remain big. Microsoft has their tentacles in OpenAI, so Amazon went for the next best thing.

    • @alainportant6412
      @alainportant6412 7 months ago

      LICENSE IT? Amazon could buy the entire industry, how dare you

  • @ThierryQuerette
    @ThierryQuerette 1 year ago +53

    🎯 Key Takeaways for quick navigation:
    00:28 🧠 The founders of Anthropic left OpenAI with a strong belief in two things: the potential of scaling up AI models with more compute and the necessity of alignment or safety.
    01:28 🛡️ Anthropic's chatbot Claude is designed with safety and controllability in mind, using a concept called "Constitutional AI" for more transparent and controlled behavior.
    03:38 🤖 Constitutional AI is different from meta prompting; it trains the model to follow an explicit set of principles, allowing for self-critique and alignment with those principles.
    07:42 ⚖️ When discussing AI regulation with policymakers, the advice is to anticipate where the technology will be in 2 years, not just where it is now, and to focus on measuring the harms of these models.
    12:37 🌍 Concerns about the climate impact of large-scale AI models are acknowledged, but the overall energy equation (whether these models ultimately save or consume more energy) is still uncertain.
    Made with HARPA AI

    • @1anre
      @1anre 1 year ago +1

      This was brilliant.
      What other capabilities does HARPA have?

  • @abagatelle
    @abagatelle 1 year ago +12

    Claude's large context window is excellent.

  • @mrcookies409
    @mrcookies409 11 months ago +37

    This guy is cooler than Altman.

    • @jaedme
      @jaedme 8 months ago +3

      coolness competition?

    • @mrcookies409
      @mrcookies409 8 months ago +5

      @@jaedme Yeah he explains things better.

    • @mrcookies409
      @mrcookies409 7 months ago +1

      @devstuff2576 nope, Altman is fine, this guy is just cooler

    • @WordsInVain
      @WordsInVain 5 months ago +1

      What a childish and meaningless comment... They are both decent in their own right.

    • @mrcookies409
      @mrcookies409 5 months ago +3

      @@WordsInVain Altman sounds more like a businessman and repeats the same thing in every interview. This guy explains things more in depth, so he is more interesting to listen to.

  • @davidkey4272
    @davidkey4272 6 months ago +3

    Whenever you hear "safety" you should think "censored." And in that sense it is odd that he left because both companies are clearly prioritizing "safety."

    • @mountainseeker2844
      @mountainseeker2844 4 months ago

      Finally someone said it. I suspect this is all intentional, to create a Pepsi-and-Coke, Microsoft-and-Apple style competition where both companies have strong ties to world governing powers.

  • @hotdiary
    @hotdiary 11 months ago +4

    I really like the interview. Great questions.

  • @Jefemcownage
    @Jefemcownage 1 year ago +7

    IMHO the constitutional model is very annoying to chat with, as it claims to be all-knowing yet bound by whatever constitution it is confined by, which is inherently impossible.

  • @Longtermalwayswins
    @Longtermalwayswins 1 year ago +17

    Why he left: for money. Done

    • @ehza
      @ehza 1 year ago +1

      Precisely lol

    • @william8632
      @william8632 8 months ago +1

      😂

  • @bobdagostino5472
    @bobdagostino5472 1 year ago +24

    What a shock, he wants regulations on open-source models that can compete with his company's proprietary offerings.

    • @bigglyguy8429
      @bigglyguy8429 8 months ago

      Yeah, it's pathetic and dangerous, not to mention no fun. Open source is the only safe way forward. Surely by now we must have learned we cannot trust any government or corporation. ANY.

    • @davidkey4272
      @davidkey4272 6 months ago

      It's disgusting. Fortunately, they will go to zero. There is no value in base models at this point. They are all converging.

    • @davidbangsdemocracy5455
      @davidbangsdemocracy5455 4 months ago

      @@davidkey4272 The problem is that many LLMs emerge from training unaligned and must be aligned in fine-tuning with RLHF. If the model is open source, superficial alignment is particularly risky because it can easily be reversed. He would likely suggest regulation that all AGI models be aligned continuously during training by an AI which is already aligned, to make sure the new model is “constitutionally” aligned not to undermine humankind.

  • @raphael1808
    @raphael1808 4 months ago

    0:45 strong belief 2: you need to set their values

  • @akvamsikrishna5535
    @akvamsikrishna5535 4 months ago

    Beautiful questions
    Beautiful answers
    Such a nice interview
    Feels like having a good dessert after lunch😅

  • @QueenLover-j5i
    @QueenLover-j5i 7 months ago

    By YouSum
    00:00:23 Pouring more compute improves models indefinitely.
    00:00:35 Safety and alignment are crucial in scaling models.
    00:01:06 Claude chatbot prioritizes safety and controllability.
    00:02:00 Constitutional AI ensures transparent and controllable model behavior.
    00:02:12 Claude's large context window allows processing extensive text.
    00:03:33 Training AI with principles differs from meta prompting approaches.
    00:04:18 Constitutional AI self-critiques to align with set principles.
    00:10:11 Concerns about AI risks evolve from bias to existential threats.
    00:11:48 Balancing open-source AI benefits with safety concerns is crucial.
    00:12:37 Considerations about the environmental impact and energy usage of models.
    00:13:35 Optimism tempered with caution about the future of AI technology.
    By YouSum

  • @johnny6756
    @johnny6756 1 year ago +9

    Wow, this guy is impressive. Seems to be highly intelligent, but also highly mature, with a very "close to reality" view of things, it seems to me. Makes me less scared about the AI future.

  • @simokokko7550
    @simokokko7550 1 year ago +3

    He says there might be a risk, a 10-20% chance that things go wrong. I wonder what he means by something going wrong. "Mildly" wrong or catastrophe? If it is a catastrophe, 10-20% is a terribly high chance.

    • @flipp081
      @flipp081 1 year ago +1

      probably catastrophe

    • @drjux2114
      @drjux2114 1 year ago +1

      Lmao I was looking for this comment. I'm pro AI, and I couldn't get past that 10-20% chance, coming from him lmaoo

    • @davidkey4272
      @davidkey4272 6 months ago

      No one ever provides a model for what that means. It's probably a fear that it might use the "N" word.

  • @USONOFAV
    @USONOFAV 7 months ago

    I'm glad he did. Claude 3 is much better than GPT-4.

  • @dawncc1
    @dawncc1 1 year ago +1

    Why does everyone have a belief that only the US is creating AI? How does safety align with that?

    • @Carthodon
      @Carthodon 1 month ago

      These people do not get out of their bubble much.

  • @AnthatiKhasim-i1e
    @AnthatiKhasim-i1e 2 months ago

    Digital Transformation: Technology has transformed businesses across all sectors, enabling automation, data analysis, and more efficient processes. This digital transformation drives innovation, enhances productivity, and improves customer experiences.

  • @onlyagreeingsometimes
    @onlyagreeingsometimes 1 year ago +2

    💳🤔 The risk is those who stop you from doing things if you don't want to use it... it should be an on-and-off switch... it's the same thing with cash vs plastic or phone swipe 💳 🤔

  • @WordsInVain
    @WordsInVain 5 months ago +1

    I believe in Claude...

  • @JustinHalford
    @JustinHalford 1 year ago +3

    Watching Dario speak so openly about the risk of AI, especially of open source models, is sobering. He is clearly concerned about the future impacts of the technology.

    • @bigglyguy8429
      @bigglyguy8429 8 months ago +1

      And it's why he won't be getting my money or that of any company I advise. Sick of censorship monkeys on my back. They're holding back progress to enrich themselves, period.

  • @NewCalculus
    @NewCalculus 1 year ago +3

    Anthropic's Claude is by far the best Chatbot. Nothing else even comes close. ChatGPT who?

  • @fortune
    @fortune 1 year ago +1

    To read more about Amazon's investment in Anthropic, click on our story here: fortune.com/2023/09/25/anthropic-ai-startup-4-billion-funding-amazon-investment-big-tech/

  • @CallSaul489
    @CallSaul489 6 months ago

    I don’t like the idea of a small group of people deciding what the “model’s values are”.

  • @zalzalahbuttsaab
    @zalzalahbuttsaab 10 months ago +1

    Yeah I just tried Claude. Nice, clean interface. Makes up things as it goes along. I won't be using it beyond the one session that I had with it. I'm sticking with chatGPT.

  • @joyjitpal
    @joyjitpal 7 months ago

    When will Claude have access to the internet?

  • @MrSchweppes
    @MrSchweppes 1 year ago +1

    Love listening to Dario Amodei

  • @yoursubconscious
    @yoursubconscious 1 year ago +3

    if they were confident, why didn't they give a live demo?

    • @kelvincudjoe8468
      @kelvincudjoe8468 1 year ago +2

      This is an interview, not a lunch

    • @yoursubconscious
      @yoursubconscious 1 year ago

      @@kelvincudjoe8468 - lunch?

    • @sarahdrawz
      @sarahdrawz 5 months ago +1

      You can use Claude for free.

    • @yoursubconscious
      @yoursubconscious 5 months ago

      @@kelvincudjoe8468 - I am happy to be corrected. I really don't mind. Though, they could have shown it still.

    • @yoursubconscious
      @yoursubconscious 5 months ago

      @@sarahdrawz - wasn't aware. 🙏📌

  • @billhanna8838
    @billhanna8838 1 year ago +4

    Met Kamala, I bet that was a "mind-blowing, full-on conversation"?

  • @instantkevlar4763
    @instantkevlar4763 11 months ago

    Seth Rogen is so smart. He even knows AI.

  • @kawalier1
    @kawalier1 1 year ago +3

    100,000 tokens more to fine-tune this model with a single prompt 😂 The CEO of Stable Diffusion has the proper approach: set up a private domain in the area of private model customizations 😎

  • @TrishaCC
    @TrishaCC 1 month ago

    I Love Claude 🙋‍♀️💜💜💜

  • @The12thSeahorse
    @The12thSeahorse 1 year ago +7

    I would like to know, if AI is so wonderful and exciting, why can’t AI give out answers to the climate change problems?

    • @roberthuff3122
      @roberthuff3122 1 year ago +1

      Turning to AI to give ‘answers’ to large complex problems is the short, direct path to tyranny. Think for yourself.

    • @billhanna8838
      @billhanna8838 1 year ago

      @@roberthuff3122 thought Climate Change was planned for that?

    • @jessieadore
      @jessieadore 1 year ago

      Because AI, particularly the way they intend to scale it, contributes to climate change

    • @ChristianKleineidam
      @ChristianKleineidam 1 year ago +4

      "Climate change problems" is a series of a lot of different problems and AI is actually giving out answers that help with some of them. It for example helped Google reduce the energy they need to use to cool their data centers by 40% back in 2016.

    • @kavinho
      @kavinho 1 year ago +1

      It's highly unlikely that the knowledge they've been trained on can be extrapolated to give an answer.
      To be able to answer such fundamental unknown questions, the AI models of today would need to operate as agents in the physical world, making scientific discoveries and drawing conclusions from them.

  • @raphael1808
    @raphael1808 4 months ago

    0:21 strong belief 1: pour more compute into this model

  • @edwardj3070
    @edwardj3070 11 months ago +1

    Recent-gen GPT looks to be truly disruptive of the workplace. That's a good thing. Hope it puts 75 million people out of their somewhat worthless repetitious data-processing jobs ASAP and forces real questions about the economy. In the US, anyway, we had "enough" for everyone decades ago.

  • @kevinr8431
    @kevinr8431 1 year ago +1

    It would be so much more helpful if you would show examples and let the product demo itself.

  • @quakers200
    @quakers200 1 year ago +2

    Do we even know to what extent these companies can be held liable for the answers they provide? Oops, we just figured out how to eliminate a third of our workforce.

    • @mughat
      @mughat 1 year ago

      Search up "Luddite". You might be one.

  • @theencryptedpartition4633
    @theencryptedpartition4633 8 months ago

    Bruh, you're not making it easy for both. The more AI companies there are, the more producers of stuff like GPUs are simply gonna hike up prices, and it will be good for neither.

  • @brasidas33
    @brasidas33 1 year ago +3

    Short Amazon

  • @blackspetnaz2
    @blackspetnaz2 7 months ago

    Short answer. MONEY! I saved you 14 min.

  • @7mikeraj
    @7mikeraj 1 year ago

    Good discussion!

  • @gmenezesdea
    @gmenezesdea 6 months ago +1

    I don't trust anyone working in AI to have our best interests in mind.

  • @ivanf2782
    @ivanf2782 8 months ago +1

    Why not put his name on the video? Why not put his name at the top of the video description? Come on! People who watch your videos are smarter and more interested in getting to know all the people in the field, not only (clickbaiting) using other people's names like Sam Altman to get attention. Can you improve on that?

  • @AjaySharma-me1sy
    @AjaySharma-me1sy 11 months ago

    Dario looks like Dan Melcher from Silicon Valley (the guy whose wives (yes, wives) Erlich Bachman sleeps with)

  • @vladimirmishkov9555
    @vladimirmishkov9555 7 days ago

    Bruh, yesterday I asked it to finish my code for a JavaScript and for some reason it gave me a full recipe on how to build a SQL virus or a backdoor or some shit. I just refreshed the chat, because I had no interest in that, but that was the first time in months that I was like: wait, wtf. Did someone ask this and did I get his response, or wtf happened? XD But all in all, I am a big fan of Anthropic's models.

  • @bigglyguy8429
    @bigglyguy8429 8 months ago

    Welp, 11:50 made my mind up for me. Was considering switching my sub from GPT to Claude, but having heard his approval of censorship there's no way this Claude thing is getting my money. I'd pay double for GPT4 if uncensored. First company with the balls to do that wins.

  • @adamy4435
    @adamy4435 7 months ago

    Wtf his name in the title 😢

  • @PankajDoharey
    @PankajDoharey 11 months ago

    There was an opportunity to make money, why wouldn't he go ahead and start a new company? Anthropic is far behind OpenAI currently, but I think eventually everyone will catch up.

  • @ShadyRonin
    @ShadyRonin 11 months ago +2

    I don’t trust this guy any more than I trust Sam. They’re all in it for a zero sum game of ultimate control. The idea of “keeping us safe” has been a ruse as old as human history

    • @alainportant6412
      @alainportant6412 7 months ago

      Zuck will open source all that crap and make these two creepy dudes irrelevant.

  • @senju2024
    @senju2024 1 year ago

    The so-called 100,000-token context window is now old school. MemGPT uses virtual context via function calls, which allows unlimited memory. I would not brag about the already limited token context window that he is boasting about. But I will give some credit, as this video is already a month old, and that is old tech in terms of AI progress.

    • @A5tr0101
      @A5tr0101 7 months ago

      Sounds like a terribly resource-intensive and badly designed AI to me

  • @avidlearner8117
    @avidlearner8117 1 year ago

    Yeah but… Claude is wrong a LOT… and often. Makes up stuff. And I mean the Claude 2 100k version.

  • @SunnyNarinderSingh
    @SunnyNarinderSingh 1 year ago

    = GAN

  • @BigDataLogin
    @BigDataLogin 1 year ago

    Cool

  • @jialx
    @jialx 7 months ago

    Is the guy who owns an AI company a pessimist or an optimist about the future of AI 🤪

  • @Goohuman
    @Goohuman 1 year ago

    My concern is regarding these guidelines or AI-rules as created by humans. You are simulating wisdom. I believe a person's core values, if they come from within or from other humans, will eventually fail in some critical way. My bias comes from Christianity and I believe wisdom comes from God, so wise people behave well towards others and defend good in word and deed, as I perceive my faith requires. The real problems will manifest as the AI must decide to lie or do something that could be perceived as bad in order to support good things. Such as lying to a person bent on doing bad and misdirecting them, or violently taking down a person who is very likely to harm or kill other good people, or even doing nothing while bad people are being harmed, allowing some less intelligent human to own the consequences of their actions.
    I'm sure the smart people at Anthropic have considered these matters, but I know from experience that there won't be a rule or law that governs such a being as a super-intelligent, massively capable thing that this AI can become. Human wisdom applies to the single human with all the limitations of a human in place.
    I look forward to seeing what the coders at Anthropic do on that level.

  • @shanecarroll7523
    @shanecarroll7523 1 year ago +1

    First

  • @dawncc1
    @dawncc1 1 year ago

    So it's a WOKE AI?…