This interview didn't address why Amazon invested billions. Here's my guess: based on the strong emphasis on safety and on training and developing the model, this chatbot is most likely going to replace some customer service representatives. Given the example of reading and interpreting a balance sheet, it could be used to clarify billing questions from customers. That is, the chatbot could see a bill from an Amazon customer, hear what the problem is, and try to either explain the billing problem or resolve it. My guess: a significant percentage of Amazon's customer service deals only with billing problems. Also -- just a guess -- Amazon tried either to build their own chatbot or license one from OpenAI, and the combination of the time needed to develop it and the cost was greater than $4 billion.
Yep!
I'm sure that is one of many revenue streams Amazon can capitalize upon. I'd add that Amazon is also a source of massive data, needed to create the AIs in the first place. Of course they want it to benefit their company.
He said "Bedrock" lets you do training on your own data in AWS? How much electricity will Amazon use if it hosts training on most private data sources? Most of the data in the world is in private hands. 🤔😊
It's not that hard. All the big companies need to be in the next big thing if they want to remain big. Microsoft has their tentacles in OpenAI, so Amazon went for the next best thing.
LICENSE IT? Amazon could buy the entire industry, how dare you
🎯 Key Takeaways for quick navigation:
00:28 🧠 The founders of Anthropic left OpenAI with a strong belief in two things: the potential of scaling up AI models with more compute and the necessity of alignment or safety.
01:28 🛡️ Anthropic's chatbot Claude is designed with safety and controllability in mind, using a concept called "Constitutional AI" for more transparent and controlled behavior.
03:38 🤖 Constitutional AI is different from meta prompting; it trains the model to follow an explicit set of principles, allowing for self-critique and alignment with those principles.
07:42 ⚖️ When discussing AI regulation with policymakers, the advice is to anticipate where the technology will be in 2 years, not just where it is now, and to focus on measuring the harms of these models.
12:37 🌍 Concerns about the climate impact of large-scale AI models are acknowledged, but the overall energy equation (whether these models ultimately save or consume more energy) is still uncertain.
Made with HARPA AI
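The "self-critique" step these takeaways describe can be sketched as a simple draft-critique-revise loop. This is a toy illustration only, not Anthropic's actual implementation: the `generate` function below is a hypothetical stand-in for a real language model call, and the two-principle `CONSTITUTION` is invented for the example.

```python
# Toy sketch of a Constitutional AI-style critique-and-revise loop.
# `generate` is a placeholder; a real system would call a language model.

CONSTITUTION = [
    "Choose the response that is most helpful and honest.",
    "Avoid responses that are harmful or deceptive.",
]

def generate(prompt: str) -> str:
    # Placeholder for a model call; echoes part of the prompt back.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(question: str) -> str:
    """Draft an answer, critique it against each principle, then revise."""
    draft = generate(question)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Critique this response by the principle '{principle}': {draft}"
        )
        # ...then revise the draft to address that critique.
        draft = generate(
            f"Revise the response to address this critique: {critique}\n"
            f"Original response: {draft}"
        )
    return draft
```

The point of the structure is that the principles are explicit strings the model is asked to apply to its own output, rather than preferences baked invisibly into training data.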
This was brilliant.
What other capabilities does HARPA have?
Claude's large context window is excellent.
This guy is cooler than Altman.
coolness competition?
@@jaedme Yeah he explains things better.
@devstuff2576 nope, Altman is fine, this guy is simply cooler
What a childish and meaningless comment... They are both decent in their own regard.
@@WordsInVain Altman sounds more like a businessman, and repeats the same thing in every interview. This guy explains things in more depth, therefore he is more interesting to listen to.
Whenever you hear "safety" you should think "censored." And in that sense it is odd that he left because both companies are clearly prioritizing "safety."
Finally someone said it. I suspect this is all intentional to create a pepsi and coke, Microsoft and Apple style competition where both companies have strong ties to world governing powers.
I really like the interview. Great questions.
imho the constitutional model is very annoying to chat with, as it claims to be all-knowing, bound by whatever constitution it's confined by, which is inherently impossible.
Why he left: for money. Done
Precisely lol
😂
What a shock, he wants regulations on open-source models that can compete with his company's proprietary offerings.
Yeah, it's pathetic and dangerous, not to mention no fun. Open source is the only safe way forward. We must, surely by now, have learned we cannot trust any government or corporation. ANY.
It's disgusting. Fortunately, they will go to zero. There is no value in base models at this point. They are all converging.
@@davidkey4272 The problem is that many LLMs emerge from training unaligned and must be aligned in fine-tuning with RLHF. If the model is open source, superficial alignment is particularly risky because it can easily be reversed. He would likely suggest regulation that all AGI models be aligned continuously during training by an AI which is already aligned, to make sure the new model is "constitutionally" aligned not to undermine humankind.
0:45 strong belief 2: you need to set their values
Beautiful questions
Beautiful answers
Such a nice interview
Feels like having a good dessert after lunch😅
By YouSum
00:00:23 Pouring more compute improves models indefinitely.
00:00:35 Safety and alignment are crucial in scaling models.
00:01:06 Claude chatbot prioritizes safety and controllability.
00:02:00 Constitutional AI ensures transparent and controllable model behavior.
00:02:12 Claude's large context window allows processing extensive text.
00:03:33 Training AI with principles differs from meta prompting approaches.
00:04:18 Constitutional AI self-critiques to align with set principles.
00:10:11 Concerns about AI risks evolve from bias to existential threats.
00:11:48 Balancing open-source AI benefits with safety concerns is crucial.
00:12:37 Considerations about the environmental impact and energy usage of models.
00:13:35 Optimism tempered with caution about the future of AI technology.
By YouSum
Wow, this guy is impressive. Seems to be highly intelligent, but also highly mature, with a very "close to reality" view of things, it seems to me. Makes me less scared about the AI future.
He says there might be a risk, a 10-20% chance that things go wrong. I wonder what he means by something going wrong. "Mildly" wrong, or catastrophe? If it is a catastrophe, 10-20% is a terribly high chance.
probably catastrophe
Lmao I was looking for this comment, I'm pro AI, and I couldn't pass that 10-20% chance, coming from him lmaoo
No one ever provides a model for what that means. It's probably a fear that it might use the "N" word.
I'm glad he did. Claude 3 is much better than GPT4
Why does everyone have a belief that only the US is creating AI? How does safety align with that?
These people do not get out much outside of their bubble.
Digital Transformation: Technology has transformed businesses across all sectors, enabling automation, data analysis, and more efficient processes. This digital transformation drives innovation, enhances productivity, and improves customer experiences.
💳🤔 The risk is those who stop you from doing things if you don't want to use it... it should be an on-and-off switch. It's the same thing with cash vs plastic or phone swipe 💳🤔
I believe in Claude...
Watching Dario speak so openly about the risk of AI, especially of open source models, is sobering. He is clearly concerned about the future impacts of the technology.
And it's why he won't be getting my money or that of any company I advise. Sick of censorship monkeys on my back. They're holding back progress to enrichen themselves, period.
Anthropic's Claude is by far the best Chatbot. Nothing else even comes close. ChatGPT who?
To read more about Amazon's investment in Anthropic, click on our story here: fortune.com/2023/09/25/anthropic-ai-startup-4-billion-funding-amazon-investment-big-tech/
I don’t like the idea of a small group of people deciding what the “model’s values are”.
Yeah I just tried Claude. Nice, clean interface. Makes up things as it goes along. I won't be using it beyond the one session that I had with it. I'm sticking with chatGPT.
When will Claude have access to the internet?
Love listening to Dario Amodei
if they were confident, why didn't they give a live demo?
This is an interview not a lunch
@@kelvincudjoe8468 - lunch?
you can use claude for free
@@kelvincudjoe8468 - I am happy to be corrected. I really don't mind. Though, they could have shown it still.
@@sarahdrawz - wasn't aware. 🙏📌
Met Kamala, I bet that was a "mind-blowing, full-on conversation"?
Seth Rogan is so smart. He even knows AI.
100,000 tokens to fine-tune this model with a single prompt 😂 The CEO of Stable Diffusion has the proper approach: set up a private domain for private model customization 😎
I Love Claude 🙋♀️💜💜💜
I would like to know, if AI is so wonderful and exciting, why can’t AI give out answers to the climate change problems?
Turning to AI to give ‘answers’ to large complex problems is the short, direct path to tyranny. Think for yourself.
@@roberthuff3122 thought Climate Change was planed for that ?
AI, particularly the way they intend to scale it, contributes to climate change
"Climate change problems" is a series of a lot of different problems and AI is actually giving out answers that help with some of them. It for example helped Google reduce the energy they need to use to cool their data centers by 40% back in 2016.
It's highly unlikely that the knowledge they've been training on can be extrapolated to give an answer.
To be able to answer such fundamental unknown questions, the AI models of today need to operate as agents in the physical world, to be able to make scientific discoveries and draw conclusions from them.
0:21 strong belief 1: pour more compute into this model
recent gen GPT looks to be truly disruptive of the workplace. that's a good thing. hope it puts 75 million people out of their somewhat worthless repetitious data processing jobs ASAP. force real questions about the economy. in the US, anyway, we had "enough" for everyone decades ago
Would you be so much more helpful if you would show examples and let the product demo itself
My thoughts exactly
Do we even know to what extent these companies can be held liable for the answers it provides? Oops, we just figured out how to eliminate a third of our workforce.
Search up "Luddite". You might be one.
Bruh, you're not making it easy for either. The more AI companies there are up there, producers of stuff like GPUs are simply gonna hike up the prices, and it will be good for neither.
Short Amazon
Short answer. MONEY! I saved you 14 min.
Good discussion!
I don't trust anyone working in AI to have our best interests in mind.
Why not put his name on the video? Why not put his name at the top of the video description? Come on! People who watch your videos are smart and interested in getting to know all the people in the field, not only in clickbaiting that uses other people's names, like Sam Altman's, to get attention. Can you improve on that?
Dario looks like Dan Melcher from Silicon Valley (the guy whose wives (yes, wives) Erlich Bachman sleeps with)
this guy fucks
Bruh, yesterday I asked it to finish my code for a JavaScript and for some reason it gave me a full recipe on how to build a SQL virus or a backdoor or some shit. I just refreshed the chat, because I had no interest in that, but that was the first time in months that I was like: wait, wtf. Did someone ask this and did I get his response, or wtf happened? XD But all in all, I am a big fan of Anthropic's models.
Welp, 11:50 made my mind up for me. Was considering switching my sub from GPT to Claude, but having heard his approval of censorship there's no way this Claude thing is getting my money. I'd pay double for GPT4 if uncensored. First company with the balls to do that wins.
Wtf his name in the title 😢
There was an opportunity to make money, why wouldn't he go ahead and start a new company? Though Anthropic is currently far behind compared to OpenAI, I think eventually everyone will catch up.
I don’t trust this guy any more than I trust Sam. They’re all in it for a zero sum game of ultimate control. The idea of “keeping us safe” has been a ruse as old as human history
Zuck will open source all that crap and make these two creepy dudes irrelevant.
The so-called 100,000-token context window is now old school. MemGPT uses virtual context via function calls, which allows unlimited memory. I would not brag about the already limited token context window that he is boasting about. But I will give some credit, as this video is already a month old, and that is old tech given the pace of AI progress.
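The "virtual context" idea mentioned here can be sketched as a model that pages older messages out to external storage when the context window fills up, then recalls them through a search function call. This is a simplified illustration of the general technique only; the `MemoryStore` class and its methods are invented for this sketch and are not MemGPT's real API.

```python
# Simplified sketch of virtual-context memory paging: when the in-context
# window fills up, the oldest messages are evicted to an external archive,
# and a search "function call" can bring relevant ones back later.

from collections import deque

class MemoryStore:
    """In-context message queue with overflow paged to an archive."""

    def __init__(self, context_limit: int):
        self.context = deque()   # messages currently "in context"
        self.archive = []        # messages paged out to external storage
        self.context_limit = context_limit

    def add(self, message: str) -> None:
        self.context.append(message)
        # Evict the oldest messages once the context window is full.
        while len(self.context) > self.context_limit:
            self.archive.append(self.context.popleft())

    def recall(self, keyword: str) -> list:
        # The "function call": search archived memory for relevant text.
        return [m for m in self.archive if keyword.lower() in m.lower()]

store = MemoryStore(context_limit=2)
for msg in ["User likes Claude", "User asked about tokens", "User said hi"]:
    store.add(msg)

print(list(store.context))      # only the 2 most recent messages remain
print(store.recall("claude"))   # older message recovered from the archive
```

The fixed window never grows, so the token cost per call stays bounded; "unlimited memory" just means anything evicted remains retrievable on demand.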
Sounds like a terribly resource-intensive and badly designed AI to me
Yeah but… Claude is wrong a LOT… and often. Makes up stuff. And I mean the Claude 2 100k version.
= GAN
Cool
Is the guy who owns an AI company a pessimist or an optimist about the future of AI 🤪
My concern is regarding these guidelines or AI rules as created by humans. You are simulating wisdom. I believe a person's core values, if they come from within or from other humans, will eventually fail in some critical way. My bias comes from Christianity and I believe wisdom comes from God, so wise people behave well towards others and defend good in word and deed, as I perceive my faith requires. The real problems will manifest when the AI must decide to lie or do something that could be perceived as bad in order to support good things. Such as lying to a person bent on doing bad and misdirecting them, or violently taking down a person who is very likely to harm or kill other good people, or even doing nothing while bad people are being harmed, allowing some less intelligent human to own the consequences of their actions.
I'm sure the smart people at Anthropic have considered these matters, but I know from experience that there won't be a rule or law that governs such a being as the super-intelligent, massively capable thing that this AI can become. Human wisdom applies to the single human, with all the limitations of a human in place.
I look forward to seeing what the coders at Anthropic do on that level.
First
So is it a WOKE AI?…