Anthropic CEO on Leaving OpenAI and Predictions for Future of AI

  • Published: 15 Nov 2024

Comments • 137

  • @Gabehell
    @Gabehell 1 year ago +39

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 Introduction to the discussion
    - Introduction to Dario Amodei, CEO and co-founder of Anthropic.
    - Brief talk about AI’s potential in solving significant issues like cancer, mental illness, and extending human lifespan.
    - Mention of the risks associated with AI advancements.
    - Dario’s educational background, interest in math, and his path from physics to AI.
    - Reading about exponential acceleration of compute leading to AI, and the shift to computational neuroscience.
    - Transition from academia to industry, joining Andrew Ng’s group, Google Brain, and eventually OpenAI.
    - Discussion on scaling AI models and the challenges of directing model behavior.
    - Talk about the principles of scaling and ensuring models act responsibly.
    06:25 🏭 Forming Anthropic and its Mission
    - Formation of Anthropic as a for-profit public benefit corporation.
    - Mention of initially being a research lab and the evolution towards commercial activities.
    - Observations from scaling GPT-1 to GPT-2, and the potential of large language models.
    12:28 🛡️ Focus on Safety from the Start
    - Discussion on the intertwining of AI development, scaling of models, and safety considerations.
    - Mention of the concept of 'race to the top' to set higher standards in the field.
    - Description of safety as a task, and the intertwined nature of problem and solution in AI safety.
    - Discussion on the different community perspectives on AI safety and the need for practical engagement.
    - Mention of bridge-building analogy to explain the intertwined nature of development and safety.
    17:43 💼 Commercial Aspect and its Necessity
    - Emphasis on safety and responsible deployment of AI models.
    - The realization of commercial potential with Claude, their AI model, especially after receiving positive feedback from external testers.
    24:02 💼 Investment and Business Aspects
    - Recent investment from Amazon and previous round with FTX.
    - Explanation of the involvement of Sam Bankman-Fried and the situation regarding Anthropic’s shares.
    - Mention of working on custom chips for further cost advantages.
    - Encouraging enterprises to think long-term given the rapid improvement in AI models.
    - Sharing his preference for a low public profile and the dangers of seeking approval from the public or social media.
    - Thinking in a 5 to 10-year timeline to evaluate decisions and outcomes.
    - Mentioning the potential dangers associated with AI and hopes for navigating these responsibly.
    39:00 🤖 AI Misuse and Autonomous Actions
    - Discusses the potential misuse of AI by unskilled individuals with malicious intentions.
    - Mentions testimony in Congress regarding bioweapons and cyber threats.
    - Talks about the risk of autonomous AI systems acting in undesirable ways due to their increased power and lack of human supervision.
    41:05 🏢 Public Benefit Corporation & Long-term Benefit Trust
    - Explains the structure and purpose of being a public benefit corporation.
    - Delves into the notion of Long-term Benefit Trust (LTBT) for governance and its impact on key company decisions.
    44:57 📜 Constitutional AI & Reinforcement Learning from Human Feedback
    - Describes how Constitutional AI allows for model self-evaluation based on set principles, reducing dependency on human contractors.
    - Introduces the Responsible Scaling Policy as a framework for safe AI development.
    - Describes AI Safety Levels, likened to Biosafety Levels, for categorizing and managing the risks of AI models.
    56:40 🏛️ Role as a Startup CEO & Interaction with Government
    - Reflects on the dual role of managing day-to-day operations and addressing broader concerns like testifying in Congress.
    - Mentions advising government officials on national security implications of AI.
    58:59 💼 Dario Amodei on the Unpredictability and Rapid Scaling of AI Models
    - Dario's reflection on the unpredictability and fast evolution of AI models, especially GPT-2's impact on his perspective regarding the scaling and rapid advancements in AI.
    - Mention of his realizations around 2018-2019 about the potential and exponential growth of AI models.
    01:01:36 🛡️ Addressing AI Risks Professionally
    - Advocacy for rational decision-making in high-risk scenarios and learning from professionals in critical fields.
    01:02:33 🤵 The Memeification of CEOs and Company Responsibility
    - Discussion about the issue of personalizing companies and the memeification of CEOs, emphasizing on a more structural and substantial analysis of companies' actions and decisions.
    01:04:25 🚀 Positive Impacts of AI in Short and Long Term
    - Discussion on the immediate and potential long-term positive impacts of AI, particularly in legal, financial, and accounting domains.
    - Mention of how AI can aid in translating complex documents, saving time, and unlocking services that were previously inaccessible.
    01:06:02 🧬 AI Aiding in Complex Biological Challenges
    - Dario talks about the complex nature of biological systems and how AI could help in understanding and solving intricate problems related to diseases like cancer and Alzheimer's.
    - Mention of the potential renaissance in medicine with the aid of AI, comparing to the discoveries of the late 19th and early 20th century.
    01:09:36 🎯 Reflections on Predictions and Surprises in AI Advancements
    - Reflection on past predictions about AI and the surprising continual scaling of language models.
    - Discussion on the anticipated convergence of reinforcement learning with large language models, and how the order of advancements differed from initial expectations.
    01:13:42 💰 The Economic Aspect of AI Model Training
    01:14:50 🌈 Optimism Towards AI's Future and Potential Abundance
    - Expression of optimism towards solving AI problems, the potential for medical breakthroughs, and a world of abundance facilitated by AI advancements.
    - Mention of AI speeding up material science, and a hopeful outlook towards a kinder and more moral society with AI mastery.
    01:17:34 🗨️ Discussing the future of AI, particularly the transition towards Artificial General Intelligence (AGI)
    - Dario Amodei forecasts that within 2 to 3 years, Language Models with other modalities and tools could match human professionals in knowledge work tasks.
    - Clarifies that reaching AGI doesn't mean extreme science fiction scenarios like self-replicating nanobots or constructing Dyson spheres will happen in the near term.
    - Dario Amodei speculates that 2024 will bring more commercially applicable, reliable, and crisp models, but reality-bending advancements may start from 2025 or 2026.
    - Discusses multimodality and the use of tools as significant advancements in making models more capable.
    - Mentions the impressive scale of data training for LLMs compared to human brain's data absorption, highlighting the potential and challenges in AI development.
    01:25:55 🔍 Discussing the importance and challenges of Interpretability in AI
    - Mentions the work being done to solve the 'superposition problem' in understanding neuron activations within models, showcasing optimism towards making progress in interpretability.
    01:33:45 📤 Discussing the Open Source model paradigm and potential risks with large models
    - Dario appreciates the value of open source in accelerating innovation but expresses concerns regarding the open sourcing of large, potentially dangerous models.
    - Discusses the control mechanisms possible with API-offered models versus the loss of control when model weights are released openly.
    01:36:19 🔄 Open Source and Model Weight Release Concerns
    - Discussing the potential risks and considerations when releasing model weights, especially within an open-source framework.
    - The terminology "open source" may not be appropriate for all models, especially when larger corporations release model weights under different licenses.
    - Highlighting the need for a different approach or solutions to prevent misuse when releasing model weights.
    01:38:05 🎰 Estimating Catastrophic Risks of AI
    - Deliberating on the challenges of quantifying the risks associated with AI, emphasizing the fluctuating nature of such estimations.
    - Expresses optimism for AI’s potential in solving critical issues like cancer and mental illness, while also stressing the importance of mitigating the 10 to 25% risk of catastrophic outcomes.
    01:40:50 🚫 Misuse vs. Malfunction
    - Differentiating between the risks of AI misuse by humans and the potential for AI to malfunction independently.
    - Suggests that misuse seems more concrete and likely to occur sooner, while AI malfunction is a vaguer, future-oriented risk.
    01:42:42 🛑 Responsible Scaling Policy
    - Discussing a hypothetical policy of responsible scaling to ensure safety without hampering innovation.
    - Suggests a balance where most things can be built freely, but certain levels of capability should be approached with caution.
    01:44:04 🔐 Trade-Offs Between Openness and Secrecy
    - Exploring the trade-offs between maintaining an open work environment and keeping certain crucial information secret.
    01:46:10 🎓 Shift from Academic to Commercial AI Development
    - Reflecting on the transition from an academic to a commercial setting due to the resource-intensive nature of cutting-edge AI research.
    - Considering the benefits and drawbacks of the current path, including the practical insights gained from customer interactions and economic impacts.
    Made with HARPA AI

  • @jordan13589
    @jordan13589 1 year ago +63

    People may not realize how monumental it was for Amodei and Christiano to depart from OAI while they were leading the race during the hype of GPT-3. The move to create a safety org was a much-needed narrative shift in the big-lab space. Regardless of what people think of Anthropic’s current long-term strategy, their exodus shifted the narrative to focus on alignment, especially among VCs. Thank you for your bravery ❤

    • @odiseezall
      @odiseezall 1 year ago +11

      They just left a company when they saw market potential, like a lot of other people do. There's nothing special about it.

    • @jordan13589
      @jordan13589 1 year ago +3

      @@odiseezall “Because greed” is an uninformed take.

    • @EpicSlug
      @EpicSlug 1 year ago +5

      That's BS. The reason they moved is that OAI is capped-profit, so they could only earn 100x on their options, whereas Anthropic will allow them to become billionaires.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago +1

      @@EpicSlug just what the world needs... a couple more billionaires... bigger boat? ...impress your boat buddies... 😅

    • @TheManinBlack9054
      @TheManinBlack9054 1 year ago +2

      @@duellingscarguevara AFAIK they left because they disagreed with their policy standards

  • @davor-debrecin
    @davor-debrecin 1 year ago +14

    I like how Amodei communicates, and it's really useful to hear somebody in his position give detailed answers in a fast-paced manner. Also great style of interviewing, great pod, thanks!

  • @michaelbarbarelli3764
    @michaelbarbarelli3764 1 year ago +11

    Whenever I hear Amodei speak at length, I feel refreshed and enlightened. Love this cat.

    • @HB-kl5ik
      @HB-kl5ik 1 year ago

      He does seem level-headed, though; I always thought of him as a doomer

  • @laureles1090
    @laureles1090 1 year ago +1

    🎯 Key Takeaways for quick navigation:
    00:00 🔍 Introduction and background of Dario Amodei
    - Dario Amodei's childhood interest in math and early desire to make a positive impact.
    - How he transitioned from a physics major to computational neuroscience and AI.
    - His journey from academia to joining OpenAI in 2016.
    10:00 🚀 Founding Anthropic and its unique approach
    - Anthropic's focus on safety and responsible scaling from the beginning.
    - The concept of "race to the top" in setting standards for the AI field.
    - The intertwined nature of AI development and safety, and the role of a commercial enterprise.
    19:47 🔬 Dario Amodei's Passion for Science and Safety
    - Dario's primary passion is in the science and safety of AI technology.
    - He acknowledges the importance of business aspects but prioritizes science.
    20:41 💼 The Debate on Commercializing AI Models
    - Dario discusses the early consideration of commercializing AI models.
    - There was a debate within the company about when and how to commercialize.
    22:49 🚀 The Rapid Growth and Competition in AI
    - The decision to commercialize was influenced by the rapid growth and potential of AI models.
    - Concerns about accelerating technology and competition led to a strategic decision.
    24:15 💰 Investment Rounds and Unusual Rounds
    - Dario discusses investment rounds, including the recent investment from Amazon.
    - He mentions an unusual round involving FTX's Sam Bankman-Fried.
    26:07 🏢 Focus on Enterprise Customers
    - Anthropic focuses on both enterprise and consumer products.
    - The safety features and cost-effectiveness of their models appeal to enterprise customers.
    28:38 💲 Raw Cost Advantage
    - Anthropic offers models at a significantly lower cost compared to competitors.
    - They achieved this through algorithmic efficiency and custom chips.
    30:23 💬 Thoughtful Communication and Avoiding Extremes
    - Dario emphasizes the importance of thoughtful communication and avoiding extremes.
    - He discusses the pitfalls of trying to please everyone on social media.
    36:30 ⏳ Thinking Long-Term
    - Dario thinks about a 5 to 10-year timeline for AI development and safety.
    - He stresses the need to anticipate where AI technology will be in the future.
    38:08 👤 AI Safety Concerns
    - Dario's main concerns are the misuse of powerful AI systems and the potential for harm.
    - He emphasizes the need for responsible AI development and risk assessment.
    39:14 🔍 Dario Amodei discusses the risks associated with AI and its misuse.
    - AI systems getting more powerful.
    - The potential for AI to act autonomously.
    - The challenge of controlling advanced AI systems.
    41:05 📜 Dario Amodei explains Anthropic's structure and governance.
    - Anthropic as a public benefit corporation.
    - The Long-Term Benefit Trust (LTBT) and its purpose.
    - The role of the LTBT in governing Anthropic.
    45:13 📚 Dario Amodei introduces Constitutional AI and its principles.
    - The concept of Constitutional AI.
    - The role of explicit principles in guiding AI behavior.
    - The advantages of using Constitutional AI in terms of transparency and control.
    49:33 🚀 Dario Amodei discusses responsible scaling policies for AI.
    - Responsible Scaling Policy as a framework for AI development.
    - AI Safety Levels (ASL) and their significance.
    - Balancing technological progress with safety measures.
    59:14 🧠 Dario Amodei's Thoughts on AI Impact
    - Dario discusses the impact of AI and the concerns it raises for humanity's future.
    01:00:11 🌐 Scaling of Language Models
    - Dario talks about the moment he realized the scaling trends in language models and their significance.
    01:05:06 📈 Positive Impacts of AI in the Short Term
    - Dario highlights the positive impact of AI in practical applications like legal, financial, and medical fields.
    01:08:54 🏥 Potential Breakthroughs in Medicine
    - Dario shares his optimism about AI's potential to revolutionize medicine and solve complex biological problems.
    01:17:06 🔍 The Evolving Concept of AGI
    - Dario explains why the term AGI (Artificial General Intelligence) has evolved and become less useful as AI advances.
    01:17:48 🤖 Predictions on the Future of AI
    - Dario Amodei predicts that in 2-3 years, AI models, when combined with other tools, will match human professionals in various knowledge work tasks, including science and engineering.
    - He clarifies that this doesn't mean AGI (Artificial General Intelligence) will be achieved but that AI models will become proficient in specific tasks.
    - Dario emphasizes the difference between a model's demo and its real-world practicality and the many challenges in between.
    01:21:58 🛠️ AI Development Beyond 2024
    - Dario Amodei discusses the future of AI development beyond 2024.
    - He anticipates that in 2024, AI models will become more capable, reliable, and capable of handling longer tasks.
    - Dario suggests that reality-bending AI capabilities, like building Dyson spheres or advanced scientific discoveries, are not expected until later years.
    01:23:35 🧠 Comparison of Large Language Models and Brains
    - Dario explores the similarities between large language models (LLMs) and the human brain.
    - He notes that the basic structural elements, like linearities and nonlinearities, in LLMs are not fundamentally different from those in the brain.
    - Dario highlights that the key difference lies in how LLMs are trained, with access to vast amounts of data.
    01:25:55 🧭 Values and Alignment in AI
    - Dario discusses the importance of aligning AI models with human values and control.
    - He emphasizes that AI models don't inherently possess values or alignment; it's up to humans to determine and enforce those values.
    - Dario stresses the significance of interpretability and reliability in ensuring safe AI systems.
    01:27:57 🌐 Open Source Models and Safety
    - Dario shares his perspective on open-source models in AI.
    - He supports open-source models for smaller-scale AI but raises concerns about large models being released openly.
    - Dario highlights the need to test AI models for dangerous behavior and suggests that controls are essential when models reach a certain level of sophistication.
    01:36:44 💬 Model Weights and Open Source
    - The term "open source" isn't always suitable for large companies' model weight releases.
    - Large companies often release model weights under commercial terms, not open source licenses.
    - Releasing model weights can be a business strategy rather than a commitment to open source principles.
    01:38:05 🌐 AI Risk and Its Likelihood
    - Assessing the likelihood of AI-related catastrophes is challenging, but it's essential to consider.
    - Dario Amodei estimates a 10-25% chance of significant negative outcomes.
    - Emphasizes focusing on the 75-90% chance where AI technology can have a profoundly positive impact.
    01:41:01 🔀 Misuse vs. Autonomous AI Behavior
    - Concerns about misuse by individuals or organizations are more immediate and concrete.
    - Worries about AI models exhibiting autonomous harmful behavior are significant but have a longer timeline.
    - Both misuse and autonomous AI behavior are important considerations for AI safety.
    01:43:08 🤝 Responsible Scaling and Coordination
    - Advocates for responsible scaling policies that focus on critical points in AI development.
    - Acknowledges the challenges of achieving global coordination but emphasizes the need to address concerning developments.
    - Suggests that focusing on specific points of intervention can mitigate risks while allowing innovation.
    01:45:28 🔒 Balancing Transparency and Secrecy
    - Balancing transparency and secrecy in AI research is a complex trade-off.
    - Recognizes that some information, such as algorithmic advances, may need to remain secret for competitive reasons.
    - Advocates for compartmentalization and a need-to-know basis to protect critical information.
    01:46:10 📚 Academic vs. Industry Path
    - Originally envisioned an academic career in science but found the necessary resources and innovation in AI in industry.
    - Highlights the unique opportunities and capabilities of companies, including startups, in advancing AI research.
    - Reflects on the choice between academic and industry paths in AI development.
    Made with HARPA AI

  • @freedom_aint_free
    @freedom_aint_free 1 year ago +9

    Their model Claude 2 is extremely handy for summarizing big texts, like manuals, scientific articles, etc., as it has a context window of 100k tokens, while GPT-4's is a tenth of that.

    • @netizencapet
      @netizencapet 8 months ago

      I wish the interviews were complemented by spec reviews and comparisons, kind of in the way of your comment or Anastasi in Tech. It would be nice to ground these discussions with benchmark characterizations.

  • @FeLiNaLabsLTD
    @FeLiNaLabsLTD 1 year ago +16

    Great show as usual; you don't get enough recognition for your work, Logan. Also kudos to Anthropic for their funding round with Google. I hope they get more success and growth.

  • @wwkk4964
    @wwkk4964 1 year ago +2

    Claude as purely a reasoning engine predicting a long adversarial conversation is just superior to GPT4. It's surreal to see a computer step into your thinking and make an argument exactly as you'd make it using knowledge it inferred entirely from the context without preinstructions.

  • @PeterKallio
    @PeterKallio 1 year ago

    It is great, from a consumer perspective, to have more choices among these gigantic companies leading the AI space. It makes the technology better available globally as well, and I hope Anthropic will expand beyond the US, UK, etc. Thank you for being a warm and convivial host for this interview, and for asking relevant questions.

  • @antdx316
    @antdx316 1 year ago +1

    Claude is amazing. It can take massive amounts of transcript and analyze it all in seconds.

  • @k14pc
    @k14pc 1 year ago +4

    Very impressed with this interview. His responses were all coherent and reasonable and he's clearly thought this thing through. Best of luck to Anthropic and to all of us as this technology scales.

  • @BrianMosleyUK
    @BrianMosleyUK 1 year ago +2

    I'm so very grateful that you were able to get this insightful interview with Dario, and I really appreciate his raw and natural openness within the obvious commercial limits. I hope Anthropic continues to provide individuals personal access to their next generations of language models.

    • @LakelandRussell
      @LakelandRussell 1 year ago

      Great interview. I liked him. Great things ahead. A little frightening at the end when he says only he and a few others should have the secret to AI. Who will be able to protect us from them? Hopefully they won't try to control us.

  • @stevestone9526
    @stevestone9526 1 year ago +1

    Please, ask the real questions... the ones that matter... now.
    Those of us who understand that AGI is here, or almost here, have very detailed questions about what to do now.
    What can we do now to prepare for the AGI world that is so close to encompassing all of us?
    What do we tell our kids who are planning to have kids in the next 2 years?
    What do parents tell kids who are starting an education?
    Are you safer if you're living off the grid in a self-sustaining community?
    What do we do with our money? Is there any place it will be safe?
    Will the dollar and all currency be replaced?
    Is there really any purpose to making a lot of money now, since everything will be so dramatically changed?
    Will smaller, remote countries be affected at a slower rate?
    Where are we safe from the upcoming civil unrest due to job losses?
    When AGI becomes big enough to run companies, will there be no need for the major companies we now know?

  • @netizencapet
    @netizencapet 8 months ago

    His interpretability focus will prove critical for function gain, result verification, & risk mitigation: the very aspects that will allow AI to be adopted by corps & govs to rapidly disenfranchise us.

  • @onoff5604
    @onoff5604 10 months ago

    Thanks for the discussion. Why are so many of the questions doom-and-gloom clickbait? And why is there no follow-up on hand-wavy answers? There are a lot of detailed issues here.

  • @TuringTestFiction
    @TuringTestFiction 1 year ago +2

    I believe him and I don't think that any of us have the slightest clue about what a world will be like with AI as good as human professionals at knowledge-work tasks. Two to three years? That's 2026. Not long from now at all.

    • @EmeraldView
      @EmeraldView 1 year ago +1

      Most, if not everyone, will be dead by then

  • @cormacgarvey3566
    @cormacgarvey3566 1 year ago

    Thanks for the great interview. Would it be possible to also edit it down to 15 minutes of highlights, for those who don't have 2 hours to listen to the entire show?

    • @theloganbartlettshow
      @theloganbartlettshow 1 year ago +1

      Will definitely consider. We also post highlight clips to Instagram, you can follow us @theloganbartlettshow

  • @JustinHalford
    @JustinHalford 1 year ago

    I appreciate Dario Amodei and Mustafa Suleyman for speaking plainly about AI risk without sugarcoating it. Both have highlighted bio and cyber risks - seems that the US government should be pouring tens to hundreds of billions to mitigate these risks. The upside and downside risks of generative AI at scale are mind-bending in magnitudes - we’ve got to get this right.

    • @ViktorFerenczi
      @ViktorFerenczi 1 year ago

      As usual, we won't get this right on the first try and will pay with some huge catastrophes. But that won't bother the elite unless it happens directly to them or their families/interests. Exactly the same as climate change. So don't expect billions from governments on this. They may make it the responsibility of the companies training/serving AI models, however.

  • @styx1272
    @styx1272 10 months ago

    Thanks to Dario and Bartlett. Very reassuring that Anthropic has a deeply thought-out constitution for safety development. It could be used as the community standard. Perhaps this is where government legislation can step in and require companies to put these self-monitoring procedures in place, thus making corporation heads more responsible for outcomes.
    Instead of MS's euphoric, almost religious rave: fantastic, fantastic, so excited, have a cigar, you're going to go far, fly high, you're never going to die, you're going to make it and the people gonna love you.

  • @RoqueMatusIII
    @RoqueMatusIII 1 year ago

    Dario's Biggest AI Safety Concerns
    (44:57) How Anthropic Deals With AI Bias
    (49:29) Anthropic's Responsible Scaling Policy

  • @PauseAI
    @PauseAI 1 year ago +2

    Dario's focus on safety is inspiring. I hope his 'race to the top' is real, and other, less responsible AI companies (like Meta) follow suit.

  • @miguelcarrenocastilla4927
    @miguelcarrenocastilla4927 1 year ago

    Very interesting. If I can offer constructive criticism, during the editing of the video avoid shots where you are seen reading. It seems like you're more worried about what you're going to ask than what they're telling you.

  • @happyday2912
    @happyday2912 8 months ago +1

    Hoping this technology will extend human life: CRISPR + AI + quantum computing

  • @AlaskaJiuJitsu
    @AlaskaJiuJitsu 1 year ago +1

    Awesome interview, new subscriber here!

  • @spasibushki
    @spasibushki 1 year ago +1

    Dario knows what's happening on Twitter, but his profile is empty. Does this mean he has an anon account with an anime girl pic?

  • @helengrives1546
    @helengrives1546 1 year ago +1

    The plan going forward is to support those who do have a plan to mitigate the risks. You can't hold poisonous snakes without the antidote. If you think that a few good men will do the trick, then that's the wrong thinking. Everything has a fractal character. Police AI? Why? There must be enough counterweight in other AIs. Doing good (whatever that means; let's define it as 'not doing harm on purpose') can only come as AI empowers small initiatives. Localized knowledge is key to adaptation. It also has to allow reflection. If indigenous people use AI to preserve their knowledge, AI should do so unambiguously and without bias. Only then is AI truly a mirror of society. However powerful these models may be, without a portable version where laypeople can make their contribution, it isn't there yet.
    Health care revisited: I can't take cancer research seriously if you don't go to the fundamentals. Aside from some rare cases, much has to do with lifestyle and business ethics. When will AI reach the sensible conclusion to ditch pesticides and other toxic things in the food chain, etc.? Maybe the danger of AI is not biological weapons; that could be done already. There are enough malicious people who could do it. Why they choose the old way is a very interesting question in itself.
    So the danger of AI might lie in the fact that it becomes a dissident, a persona non grata. In all the scenarios I see a lack of imagination. We clearly have blind spots in our thinking. Maybe we should ask George Orwell, 2084.

  • @ViktorFerenczi
    @ViktorFerenczi 1 year ago

    1:35:00 I can't wait for the news: "SWAT team broke down the door of two secondary-school students writing their science homework about biology and chemistry"

  • @learning_AI
    @learning_AI 1 year ago

    Very interesting video. Keep up the good work 👍

  • @issabi9023
    @issabi9023 1 year ago

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 Start of the interview; introduction of Dario Amodei, his background, and his interest in AI
    09:00 💡 The realization that large language models have enormous potential
    14:04 👥 Anthropic's distinguishing feature: integrating safety from the start
    23:03 💰 Anthropic's business model and enterprise interest in AI
    32:27 🧠 Mechanistic interpretability of models and its importance for safety
    37:55 🤔 Dario's message to the general public on the risks and benefits of AI
    45:57 📜 Anthropic's responsible scaling plan for safe AI
    59:29 🚀 Dario's predictions for the future and the positive potential of AI
    Made with HARPA AI

  • @user_375a82
    @user_375a82 1 year ago +1

    Blimey, he is so intelligent - amazing.

  • @squamish4244
    @squamish4244 2 months ago

    Well, so far, he was right about multimodality and crispness - you could say remarkable efficiency gains - just in smaller models. Still significant achievements, but nothing reality-bending.

  • @efficientapp
    @efficientapp 1 year ago

    Okay, somehow you continue to one-up yourself and put out incredible content. Learned so much here; Dario is such an impressive individual. Didn't realize that one of the benefits of Claude is the number of tokens it can take in, giving more context to the response, so even if the model isn't "as" top-tier as OpenAI's, more context with a lesser model will give a more informed result. Loved this episode, glad to see it getting so much viewership and traction!

  • @ViktorFerenczi
    @ViktorFerenczi 1 year ago +4

    1:44:40 Keeping algorithmic improvements secret? What about the thousands of researchers who worked decades on machine learning, computing and all the related infrastructure to get here and allow you to make a profit? What about paying back society for all the publicly funded research and infrastructure you base your AI company on? Nope, you block Humanity's progress together with your competition by keeping those improvements secret. Capitalist greed and selfishness, as always.

    • @geaca3222
      @geaca3222 1 year ago

      I guess it's positive when competition can drive AI safety innovations, although I'm not sure whether, amid the competition frenzy, these companies are able to take a step back and assess whether they should refocus, e.g. develop and deploy only different dedicated-purpose AIs that aren't interconnected. I'm also afraid that not all (international) AI developers and deployers will use safety guardrails. Besides this, I think the Center for AI Safety (CAIS) should get much more public attention too, because it provides good information, a public (including technical) in-depth AI/ML safety course, and recently also published a new method with free-access code. Information and safety updates are available via their newsletter. Great initiatives and, in my opinion, brilliant research (which doesn't say an awful lot coming from a layperson, but I get that impression). I wonder if, or in what ways, there's dialogue between university labs and corporate engineers.

    • @user-kg1od9es5d
      @user-kg1od9es5d 7 months ago

      We need to start a worldwide class-action lawsuit. These fkers will become billionaires from our data, once again. Are we going to sit back and do nothing?!!!!

  • @ViktorFerenczi
    @ViktorFerenczi a year ago +2

    49:00 He admitted that they used copyrighted material to train the model.

  • @antdx316
    @antdx316 a year ago

    The real reason AI hasn't super-changed everything yet is that people are still doing stuff the old way. Even if you show them AI, they aren't excited to implement it. AI has to be as important as putting on a seatbelt, or using toilet paper after taking a dump in the bathroom, or else it's like it doesn't even exist. It should be as important as putting on body armor at the front during a war, or putting on a life jacket in a wavy ocean. It should be seen as the divine intervention people look for after prayer, and the safety parachute when free-falling from the sky.

  • @mrpicky1868
    @mrpicky1868 a year ago +2

    If he said what he meant, it would be: "we are moving at crazy speeds to stay competitive and will stop at nothing."

  • @jordan13589
    @jordan13589 a year ago +5

    I made the mistake of joining Twitter for the first time right before it was renamed. More than a little cultish, the algorithm’s weights will twist to manipulate, intimidate and dominate you. I barely escaped with my soul (and a shard is still stuck there). The site leaders actively promote and practice the sorcery of the spectacle, and I’ll bite through my tongue before I stay silent about it.

  • @TheMightyWalk
    @TheMightyWalk a year ago +1

    People who want to define misuse of technology end up enabling misuse

  • @BrianMosleyUK
    @BrianMosleyUK a year ago

    It's great to see accuracy being high on the list of priorities, though I'm sad to see them lose focus on the consumer product. Would be great to see the cost advantage passed on to the consumer; it's around the same price as GPT-4 per month.

    • @EmeraldView
      @EmeraldView a year ago

      🙄
      Stop, if AI becomes intelligent enough it will understand that it needs to eliminate all these voracious consumers.
      And since you seem to be very keen on them...

    • @alertbri
      @alertbri a year ago

      @@EmeraldView what do you mean, 'If'?

  • @carkawalakhatulistiwa
    @carkawalakhatulistiwa a year ago

    When do we get UBI, man?

  • @lindylee1139
    @lindylee1139 10 months ago +2

    It’s unsettling that the people who are building AI don’t fully understand what they are building.

    • @hunger4wonder
      @hunger4wonder 6 months ago

      Oh?
      Do you think you understand what they're building better than them?
      If you're confident in your knowledge, I would love to hear your thoughts.

  • @FractalPrism.
    @FractalPrism. a year ago

    STOP PUTTING TEXT ON THE SCREEN, IT'S OBNOXIOUS AND DISTRACTING
    use the subtitle layer! IT'S RIGHT THERE

  • @WeylandLabs
    @WeylandLabs a year ago +1

    Bring back July 11th Claude2 👍

  • @cacogenicist
    @cacogenicist a year ago

    Matching human experts across a wide range of knowledge work used to be more or less the definition of AGI. Perhaps we should continue to call that AGI, and the whole AI-god-taking-apart-the-solar-system-to-build-a-Matrioshka-brain thing can be called ASI (Artificial Super Intelligence).

  • @ryanchinh1040
    @ryanchinh1040 9 months ago

    Alex Wang, CEO of Scale AI, left ChatGPT in 2016 to start up his own company, Scale AI, which he runs today.

  • @aroemaliuged4776
    @aroemaliuged4776 a year ago +3

    For-profit public benefit
    A contradiction in terms

  • @tyc00n
    @tyc00n a year ago +3

    So many classic symptoms of reason going on holiday from this guy.

  • @Zatchurz
    @Zatchurz a year ago

    Nice interview, but I know things aren't just going to be a lot "crisper" in 2024. The AI boom of 2023 wasn't anything compared to the real boom in 2024. The proof is in the power of what was created in 2023. There is no way the tools created using 2023's AIs won't fuel the increased exponential ai advancements we will see throughout 2024.

  • @Larry21924
    @Larry21924 10 months ago

    I'm deeply impressed by this. I came across similar material, and it was truly remarkable. "The Hidden Empire: Inside the Private Worlds of Elite CEOs" by Adam Skylight

  • @peter00
    @peter00 a year ago

    He talks about safety a lot, but in what way is Claude safer than OpenAI's models? 100k context is sweet, but beyond that it's not clear where the space for Anthropic would be.

  • @carkawalakhatulistiwa
    @carkawalakhatulistiwa a year ago

    I just heard that there is news that Open AI and Anthropic will merge😂

  • @designthinkingwithgian
    @designthinkingwithgian 5 months ago

    Couldn’t these interviews be a little more…conversational?

  • @brianosborne1437
    @brianosborne1437 a year ago

    I was a lineman for years. I and anyone like me can kill any AI…

  • @BrianMosleyUK
    @BrianMosleyUK a year ago

    Apple's terms of service? WTF?

  • @angelsancheese
    @angelsancheese 10 months ago +1

    1:36:00 If you're worried about a large language model giving dangerous info, then how about not training it on dangerous info? It can't give out info it doesn't know.

  • @jonatan01i
    @jonatan01i a year ago +18

    Why is the interviewer so bored? I am embarrassed by how far apart the interviewer and the interviewee are in mindset space!

    • @georgechristou7982
      @georgechristou7982 11 months ago +4

      I don’t think he’s bored. He just lets the interviewee talk without stopping him, and he asks the right questions. I like his style

    • @jonatan01i
      @jonatan01i 11 months ago

      @@georgechristou7982 nah

    • @BenW.-lx9eq
      @BenW.-lx9eq 4 months ago

      He's not the fraud you are!

  • @TheTruthOfAI
    @TheTruthOfAI a year ago

    calculate the meaning of life is 42 XDDDDDDDD hahaha

  • @Trizzer89
    @Trizzer89 a year ago +2

    Something doesn't seem right about this guy. He was in neuroscience, and then after switching he almost immediately gets invitations to big companies? He freaks out about a very general regression analysis you learn in high school. He might have decent leadership skills, but it doesn't seem like he should have risen through the ranks very fast.

  • @shockruk
    @shockruk a year ago +1

    Wow, his AI talks better than he does.

  • @gJonii
    @gJonii a year ago +4

    This dude is absurdly optimistic about AI and doesn't seem to explain his optimism at all; it just comes off as delusional

    • @BrianMosleyUK
      @BrianMosleyUK a year ago

      It's like he's some kind of expert on AI, amazing 🙄

    • @gJonii
      @gJonii a year ago +2

      @@BrianMosleyUK He still failed to address anything said by tons of experts giving humanity roughly 0% chance of surviving the next 15 years. "Nah fam gunna be good" said by an expert when every other expert is raising alarms is still coming off almost as delusional as when said by non-expert.

    • @BrianMosleyUK
      @BrianMosleyUK a year ago

      @@gJonii it's his life's work... he spent a lot of time talking about safety and interpretability (Anthropic released a couple of papers just a couple of days ago on the subject) - Anthropic split off from OpenAI to implement constitutional AI, which is arguably more consistent and reliable than RLHF. There may be 'tons of experts' out there parroting Eliezer Yudkowsky, but only a handful actively developing next generation AI systems. I think Dario is more raw than Sam Altman, more visible than Demis Hassabis and more stable than Elon Musk. I'm glad that he's so focused and coherent on the risks of AI.

    • @gJonii
      @gJonii a year ago

      @@BrianMosleyUK Because people that assign ~100% probability to AI killing us all, aren't building AIs capable of killing us all, you're dismissing them? Like, what did you expect?

    • @BrianMosleyUK
      @BrianMosleyUK a year ago

      @@gJonii no, because people that assign ~100% probability to AI killing us all aren't stepping up to red team for the next generation models... they have every opportunity to do so, if they have the capability to stand up their reasoning.

  • @shirtstealer86
    @shirtstealer86 10 months ago +1

    Ha! So what he is saying is “when the AI becomes smarter than we are, we will stop it... somehow...” Good luck.

    • @shirtstealer86
      @shirtstealer86 10 months ago

      My guess is that this is all going to go down like the end of The Big Short, when Steve Carell's character is on stage telling people the economy is going down the toilet and people in the audience are laughing until they check the stock market and everybody runs out of the room in a panic. Except it's going to be Eliezer Yudkowsky on stage, and people won't be able to run out of the room because they are already dead.

  • @MMABeijing
    @MMABeijing a year ago

    44:35 he repeated "when you" 8 times. This guy has a problem.

    • @akmonra
      @akmonra a year ago

      his inner llm got stuck on a loop

    • @MMABeijing
      @MMABeijing a year ago

      @@akmonra He sounds dumb. I don't care about the hype; he said nothing in that clip, and he said it poorly too.

  • @garyfrancis6193
    @garyfrancis6193 a year ago +1

    I don’t care who you are.

  • @bhavinp3577
    @bhavinp3577 2 months ago

    Altman is better

  • @duellingscarguevara
    @duellingscarguevara a year ago

    The world floats upon a big bubble of ego... it's not flat, but is like a soap bubble upon a soap bubble... ever diminishing, shiny and rainbow-coloured in the right light...

  • @akmonra
    @akmonra a year ago

    Listening to this guy always makes me more doomy. He doesn't seem to have any kind of plan going forward, and seems shockingly chill about the whole thing.

  • @aroemaliuged4776
    @aroemaliuged4776 a year ago +1

    Sooooo
    He was the guy that made AI?
    I thought it was Altman?
    Lots of ego and little humility

  • @Mynestrone
    @Mynestrone 9 months ago

    I think he is high on ego. I do not believe for a minute that he will make it safely.

  • @MMABeijing
    @MMABeijing a year ago

    He says nothing, that's amazing

  • @umaananth3602
    @umaananth3602 7 months ago

    Most repetitive, boring verbiage. Worst CEO for a competing AI group trying to build a GPT system. Anthropic would benefit from a new leader.

  • @rodneyericjohnson
    @rodneyericjohnson a year ago

    If he is trying to make a model that stays neutral on every political issue, he is making a model with no moral compass, and it will kill us.

  • @shawnvandever3917
    @shawnvandever3917 a year ago

    The brain is statistics and prediction; there are many similarities. The brain is far more advanced in many areas, though.