KairosFM
  • 47 videos
  • 786 views
#23 - National Security and Surviving Disinformation with Ryan McBeth
Become a Patron on Patreon (www.patreon.com/21stTalks) and support the show!
Check out Ryan McBeth's channel (www.youtube.com/@RyanMcBethProgramming) if you enjoyed the conversation!
Ryan McBeth, a software engineer and cybersecurity expert, discusses his work in combating disinformation and misinformation online. He explains his methodology of using Occam's razor to analyze information and debunk conspiracy theories. Ryan also talks about the importance of internet literacy and the promotion of critical thinking. He shares examples of disinformation he has encountered, particularly in relation to the conflict between Israel and Palestine. Ryan emphasizes the need to take information war...
Views: 44

Videos

US National Security Memorandum on AI, Oct 2024
125 views · 1 day ago
October 2024 saw a National Security Memorandum and US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways. • (00:48) - The memorandum • (06:28) - What the press is saying • (10:39) - What's in the text • (13:48) - Potential harms • (17:32) - Miscellaneous notable stuff • (31:11) - Wh...
Understanding Claude 3.5 Sonnet (New)
15 views · 14 days ago
Frontier developers continue their war on sane versioning schema to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now. • (00:00) - Intro • (00:22) - Hot off the press • (05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000 • (09:23) - Breaking down "computer use" • (1...
Winter is Coming for OpenAI
10 views · 21 days ago
Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready! • (00:00) - Intro • (00:28) ...
Open Source AI and 2024 Nobel Prizes
12 views · 1 month ago
The Open Source AI Definition is out after years of drafting, will it reestablish brand meaning for the “Open Source” term? Also, the 2024 Nobel Prizes in Physics and Chemistry are heavily tied to AI; we scrutinize not only this year's prizes, but also Nobel Prizes as a concept. • (00:00) - Intro • (00:30) - Hot off the press • (03:45) - Open Source AI background • (10:30) - Definitions and cha...
SB1047
269 views · 1 month ago
Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the now vetoed California legislation that had Big Tech scared. • (00:00) - Intro • (00:31) - Updates from a relatively slow week • (03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen) • (05:24) - What is SB1047 • (12:30) - Definitions • (17:18) - Understanding the bill • (28:42) ...
OpenAI's o1, a.k.a. Strawberry
45 views · 1 month ago
OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one! ⚠ Opt out of LinkedIn's GenAI scraping ➡️ lnkd.in/epziUeTi • (00:00) - Intro • (00:25) - Other recent news • (02:57) - Hot off the press • (03:58) - Why might someone care? • (04:52) - What is it? • (06:49) - How is it being sold? • (10:45) - How do they explain it, technically? •...
INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
20 views · 5 months ago
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more? If you're interested in connecting with Igor, head on over to his website (krawczuk.eu/), or check out placeholder for thesis (github.com/into-ai-safety/into-ai-safety.github.io/blob/master/_posts) (it isn't p...
#22 - Superintelligence, AI, and Extinction with Darren McKee
10 views · 1 month ago
On this episode of the podcast, Coleman sits down with Darren McKee to discuss his book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World (www.darrenmckee.info/uncontrollable-the-threat-of-artificial-superintelligence-and-the-race-to-save-the-world). The two discuss the central case for concern surrounding AI risk, deep fakes, and Darren’s approach to un...
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (3)
20 views · 7 months ago
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Post-doctoral Fellow working with Dr. Max Tegmark at MIT. As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI (www.stakeout.ai) along with Harry Luk and one other cofounder whose name ...
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (2)
20 views · 7 months ago
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeOut.AI (www.stakeout.ai), a non-profit focused on making AI go well for humans, along with Harry Luk and one other individual, whose name has been removed due to requirements of her current position. In addition to the normal links, I wante...
MINISODE: Restructure Vol. 2
1 view · 8 months ago
UPDATE: Contrary to what I say in this episode, I won't be removing any episodes that are already published from the podcast RSS feed. After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regarding "AI" instead of the show's original focus. I will still be releasing what I am calling research ride...
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)
24 views · 8 months ago
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI (www.stakeout.ai/), a non-profit focused on making AI go well for humans. 00:54 - Intro 03:15 - Dr. Park, x-risk, and AGI 08:55 - StakeOut.AI 12:05 - Governance scorecard 19:34 - Hollywood webinar 22:02 - Regulations....
MINISODE: "LLMs, a Survey"
7 views · 8 months ago
Take a trip with me through the paper Large Language Models, A Survey (arxiv.org/abs/2402.06196), published on February 9th, 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website (into-ai-safety.github.io). 00:36 - Intro and authors 01:50 - My takes and paper structure 04:40 - Getting to LLMs 07:27 - Defining LLMs & emergence 12:12 ...
FEEDBACK: Applying for Funding w/ Esben Kran
21 views · 8 months ago
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think that this episode can be a valuable resource for both others and myself when applying for funding in the future. Head over to Apart Research's website (apartresearch.com) to check out their work, or the Alignment Jam we...
MINISODE: Reading a Research Paper
5 views · 9 months ago
HACKATHON: Evals November 2023 (2)
4 views · 9 months ago
MINISODE: Portfolios
9 months ago
INTERVIEW: Polysemanticity w/ Dr. Darryl Wright
12 views · 9 months ago
MINISODE: Starting a Podcast
9 months ago
HACKATHON: Evals November 2023 (1)
4 views · 9 months ago
MINISODE: Staying Up-to-Date in AI
1 view · 9 months ago
INTERVIEW: Applications w/ Alice Rigg
23 views · 9 months ago
MINISODE: Program Applications (Winter 2024)
3 views · 9 months ago
MINISODE: EAG Takeaways (Boston 2023)
2 views · 9 months ago
FEEDBACK: AISC Proposal w/ Remmelt Ellen
9 views · 9 months ago
MINISODE: Introduction and Motivation
4 views · 9 months ago
#20 - Animal Ethics and the Moral Weight Project with Bob Fischer
3 views · 1 month ago
#19 - Tobias Baumann on S-Risks and Avoiding the Worst for Humanity
22 views · 1 month ago
#18 - Anders Sandberg: Longtermism, Transhumanism, and Making Sense of the Existential Risk
14 views · 1 month ago