No one who speaks that fast thinks things are going to move slowly
LOL
To totally disregard the risks by dismissing the notion of malign human actors utilising this technology is not sensible. We've already seen massive political destabilisation through Cambridge Analytica, and that was even before the application of modern AI tech.
This is just blind optimism. Yes, the benefits can be massive, but we also need to see the potential negatives. The effects of these paradigm shifts have been getting progressively larger over the past 100 years. It at least deserves serious engagement with how to limit potential harm, rather than covering your ears and barrelling in.
Why do we need to predefine risks? Every technology ever introduced started at small scale and grew with iterations. As the scope changed, the challenges changed, but we still ended up with largely useful and great tools. Worrying about hypotheticals will slow down progress and lead you down wild goose chases unrelated to the actual subject.
Okay, man. A question for you. If someone decides to use it for bad purposes in Russia, China, or other countries... how can you stop that through so-called regulation? Yes, it can be used with bad intentions, but it's impossible to stop.
An AI apocalypse does _not_ require anthropomorphization. Calling people irrational cultists is _not_ a strong argument against their position.
Andreessen does not appear to have seriously considered the problem, and Roberts does not seem to have understood what some of his other guests have explained to him.
They did say that they weren't even steelmanning the argument anymore. Weird how Russ didn't push back on it more.
The scientific argument is the killer argument, and it's where the whole doomerism BS falls short.
Great one this week Russ. Listening to Marc is always so insightful. His analogy to the AI fear mongering being like a cult is spot on (thinking of Sam Harris here).
720p, guys? Come on, it should be 4K.
Sound is also quite scratchy.
Why would AI do anything we want? Do you listen to what ants tell you?
Why are you scared that AI will act how it wants?
Great video.
Imagine having to move house and move all those books, 99% of which will never be opened again 🤡
It is completely naive to think that this technology won't be used by the rich and powerful to control the poor.
It's naive to think that technology won't be used by the poor to control the rich.
This is the 7th episode on the topic of artificial intelligence on EconTalk podcast this year.
Thank you
The dismissals performed here are astounding, and not in a good way.
Yeah, take what a VC says about AI risk with a grain of salt. They have every incentive to stay ahead of the ball with this technology, as it's existential to their firm's existence if they miss it.
Good show 🎉
Andreessen is an ultra-elite intellectual NPC. Definitely on the opposing team.
He's unique anthropologically. You are an NPC like many others. Believer.
Have either of them heard of "alignment"?
Yes, and as we all know, it's just a BS term.
@@davidw8668 Not sure who "all" is in the comment, but seems to be carrying a LOT of weight. It clearly excludes anyone discussing ML safety with half a clue
@@pconyc boogeyman clues?
No one really loves vintage computers.
By throwing out the precautionary principle he's talking his own book. The reason this principle has been invoked in modern times is that the blast radius of our technology has greatly expanded. There should be no hurry.
Please do an episode dedicated to the intersection of medical freedoms and human rights.
Russ Roberts uses ChatGPT to write poems?? Seriously.
Can not fucking stand how Marc inhales after every word he says...
And when Marc is wrong and most of humanity is destroyed and the few left are in zoos, he'll still make a tidy profit.
Your point is?
@@davidw8668 As long as you're one of the victims I'm good with it.
@@DanHowardMtl victim of your fantasies? You are.
@@davidw8668 Try constructing proper sentences next time. I know it's hard but....
There are millions of you. How are you different from those who believe in Jesus Christ or an afterlife? All of you believers have one thing in common. Global incompetence.
T-t-t ah ah ah wah wah.
Music sounds better at home than live. This is just a fact if you have a nice sound system.
Ok I checked 3 times if I bumped my Apple Watch and sped up this podcast. Talking faster does not help comprehension or maybe it’s just me. 😵💫
A good way to listen to this is on a podcast app that can slow down the speech; I listened to this episode at 0.8x speed, and it's still faster than average talking.
Regarding the 35-minute mark, on how some believe consciousness will emerge out of complex/unpredictable machines: I think we've become seduced by the power of technology. We've applied the mechanistic/materialist approach so effectively that we think we could describe all of reality this way (i.e. humans are nothing more than sophisticated machines, and matter gives rise to consciousness).
BTW, I recommend how Naval Ravikant explains it on the Knowledge Project Podcast at the 1h mark (episode 18 or 171)