Hi to everyone coming in from news coverage of this interview. To listen to the full episode, and others like it, subscribe to Big Technology Podcast on your podcast app of choice:
Spotify: spoti.fi/32aZGZx
Apple: apple.co/3AebxCK
Etc. pod.link/1522960417/
There is a reason why pretty much the whole GPT-2 and GPT-3 team left OpenAI to found Anthropic.
Dario Amodei, CEO of Anthropic, was the team lead for GPT-2 and GPT-3.
Confidentiality in the context of this story can mean two things: 1) the preservation of secrets that the employee is aware of and refuses to reveal because he is more concerned about covering his own ass; 2) the preservation of secrets that do not really represent an existential danger to humanity, and whose non-disclosure guarantees an aura of mystery and importance around the employee who says "I can't reveal this because I signed a confidentiality agreement."
With the information discussed here, it is not possible for anyone to determine which scenario is really occurring. However, we know that Americans are very fond of bullshit and of personal self-promotion built on bullshit. So the second hypothesis seems more likely to me than the first.
00:02 Concerns about OpenAI's trajectory
02:15 Prioritizing product over safety concerns in AI development
07:02 Challenges in trusting advice from highly intelligent AI systems
09:04 Employees advocating for the right to publicly voice concerns
13:35 Former OpenAI safety employees express concerns about the company's future path
15:38 Challenges in monitoring language model behavior for safety concerns
19:24 Importance of external oversight for addressing harm caused by technology companies
21:23 Ex-OpenAI employees raised concerns about illegal agreements
25:10 Ex-OpenAI safety employees' concerns
27:03 Advocating for a new regulatory agency and an AI whistleblower protection rule
30:41 Proposed model for submitting whistleblower complaints
32:39 Regulating AI would require new legislation
36:11 Concerns about lack of understanding and control over advanced AI models
38:14 Concerns about the lack of serious preparation for AGI risks
42:15 Balancing legal rights and responsibilities in decision-making
44:23 Concerns about AI safety and accountability
48:17 Concerns about support for safety work and testing before launch
50:15 Potential risks of widespread AI systems without proper safeguards
Crafted by Merlin AI.
Alex, you are a good podcaster; you deserve many more subscribers than you currently have!
Thank you! Please spread the word! The Big Technology Podcast audio edition will do 1 million+ downloads this year as well. I am so glad people choose to tune in, and I will keep putting these videos out :)
Mr. Kantrowitz asked the right question. Where is the fire?
Hey Google, what does a cytokine storm look like?
I'm appalled at how seemingly the smartest people in the world have no real clue about past events, like the sinking of the Titanic. These guys are "building the future"?
This guy's scared of his own shadow.
I tuned out when he kept talking about "disinformation."
This is alarming.
❤