Here are the timestamps. Please check out our sponsors to support this podcast. 0:00 - Introduction & sponsor mentions: - Yahoo Finance: yahoofinance.com - MasterClass: masterclass.com/lexpod to get 15% off - NetSuite: netsuite.com/lex to get free product tour - LMNT: drinkLMNT.com/lex to get free sample pack - Eight Sleep: eightsleep.com/lex to get $350 off 2:20 - Existential risk of AGI 8:32 - Ikigai risk 16:44 - Suffering risk 20:19 - Timeline to AGI 24:51 - AGI turing test 30:14 - Yann LeCun and open source AI 43:06 - AI control 45:33 - Social engineering 48:06 - Fearmongering 57:57 - AI deception 1:04:30 - Verification 1:11:29 - Self-improving AI 1:23:42 - Pausing AI development 1:29:59 - AI Safety 1:39:43 - Current AI 1:45:05 - Simulation 1:52:24 - Aliens 1:53:57 - Human mind 2:00:17 - Neuralink 2:09:23 - Hope for the future 2:13:18 - Meaning of life
Yes, A.I creates video games is the best way. Because lets face it, the most interesting part of the universe is we can participate in a game that we want to play. Any argument against this is someone that 1) Wants to enforce their game on you. 2) Is playing a game different game, and they want people to play their game. Surveillance, manipulation, and deception is not a game, that's called war against people who want to be free (expect a punch in the nose).
Ive been a first responder for the last few decades. One of the rules of my profession, especially when dealing with life and death, is to expect and always be prepared for the worst case scenerio and mitigate risks as much as possible. This man understands that concept. I love Lex's optimism, but in some situations optimism can be very, very dangerous
Blind optimism is unfortunate and normally comes from those who have never experienced a situation were their desires or safety have come into compromise.
I think that before we reach superinteligent AGI we'll reach a more darker and oppressive state of technofeudalism that can be best summarized by this amazing quote from the first Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
To me that still remains complete sci-fi. I've not seen any indication of our society moving in that direction yet, in fact it is remarkable how little society has changed at all with the current breakthroughs. I strongly believe that people are too afraid of change for any fundamental societal scale changes to happen without major pushback. Great book tho!
Lex I love your work man, but I'm really getting bummed out by your inability to truly understand these AI skeptics. It's like you have put up a mental block towards the dangers of AI. Even Roman tried explaining several times; the way in which AI could harm or destroy our society is far beyond our own comprehension. He used an example that I don't think you listened to very well. He basically said "If you ask a squirrel to imagine all the ways in which a human could kill it, the squirrel couldn't even begin to understand all the ways that WE as humans know about." Squirrels comprehend that we can be a danger to them, but they can not (and will not ever) be able to understand how a gun works. It's outside their comprehension. Apply that to AI vs Humans. We simply do not know what these super powerful agents will be capable of. We can't know, because we're the squirrel and they're the human.
@mattwesney It bums me out because (and this is merely an assumption on my part) I believe he's so excited for the age of abundance, that he has developed a bias towards AI only producing a net positive outcome. I believe he's incapable of seeing the potential dangers, because the potential promises are so damn alluring. Again, merely an assumption and opinion of mine from listening to Lex over the last couple years.
partially true, but there's a certain threshold you can cross to where if you're self-aware and have the ability to learn and use tools that increases your capacity to understand to near limitless levels through simplification, i think humans have crossed that line, so you cant really compare us to ants or squirrels
This podcast Revolution is following the same destructive path of the last Revolution and that's incentivizing negative content. While I'm also here enjoying this content we need to start thinking about building an alternate reality not trying to control this one.
Definitely. To assume that evil actors are doing what they think is “good”. And to forget about psychopaths who would have no reason not to want to watch the world burn… beyond that humans are irrational. Bad actors could easily “accidentally” destroy the world through plain ignorance or negligence.
with all the respect, and excuse me to choose your comment - but its short and is like obvious... either you mean it sarcastically or not all the same. the question is who is understanding it how :) Too much credit? to humans.... what? well i cant exactly go pinpoint where is that your opinion formed: but Lex, invites someone that says 100% we going down... Well... like... isn't exactly this thing to try... do? i guess they d come to something - or maybe he says some magical solution, to all problems? I doubt it. But i believe that guy, he has extreme view and its our hard work to think!!!! Lex helps by entertaining this very idea - and thats what Lex tries very hard and respectful... "If we, create"... when its gone :) "zero shot---" yep, elon said it... and again and others much before him and he named em... andandand "thats gone :)" everyone... agrees :) Well that is EXCEPT: nvidia, MS, googlle etc, ALSO openAI - just what? try to help? by doing ALL IN. Now, WHEN YOU DISREGARD TWICE, musks calls, .... He now... MUST... anmd it is official - ALL IN Sam, and MS, did allow themselves to bet against Musk. Well Bil Gates is a power to be recon on - idk who is there the really the DEAL. Bill? Musk? Or the doctators we can see??? i was ok... he put a lawsuit, asked politely - got ignored by MS and Sam :) Honestly, hate on Must for some tweets seems so great.... or build ICEs,. while all failed to simply get some help from tesla anfd join forces towards some desaicion that seems more feasible at leats?? Nooo leave Musk, and just lawsuit the sh** out of all tweets everything one can think of. Like, that dude build the stuff that gonna give some change your kids will got any possible air to breath or a way to go to the moon or wherever and try something... Tunnels are stupid? - sure - so... we wanna fly? alie what all goona go to fly or what exacly - GO BUS!!! hahah I got a bike, to only sit and get hated in traffic bc i tried use 1 seat instead of 6, with 5 empty, to move my ass around the city to work and back :D so what did hamas used? and btw do you believe Russa builds the real stuff anywhere than under grond and underwatter. Submarenes anyone... very soon youtube even, wont be able to be used :) Until now, AI was all around listening... well now it started talking - and learning... copies with speed of light - there a papers, that proove it all. It can lie, (HER) but cant reason... at which point a single satya, or sam (benevelence from intelligence) can do whatever to anything incl politicials and policies arent helping us, they helping them... if not, will get rewriten, and update goona get pushed, why ask me about cookies or terms of service aggreement - only to he able to ask, why is that policy WRONG,... no, why is its INTERPRETATION simply ARBITRARY and NOT UNDER *COLLECTIVE* HUMAN CONTROL "Some ppl think who think they can do no wrong..." "There are ppl whats so special of humans" - we fear that we will really get some real jugjement, i sure am :D althoug my data suggest i am good actually. 20 years google dominates and knows.... EEEVERYTHING ... (blablab).... haha thats... a long rant... phew... i... will leave now :D
I think Lex loves how much he loves everyone. But he should go live in Palestine & help the people there if he truly loves every human as much as he says he does.
@@sierramist446 I mean, most "evil" actors truly believe what they're doing is "good", read Zimbardo's Lucifer Effect... still, Lex gives humanity way too much credit, it sometimes gets a bit hard to listen.
A MUST WATCH! Loving it! 1:02:29: "I for one, welcome our overlords!". Got to know about Roman Yampolskiy from this talk and now he is my favorite guest!
This entire podcast doesn't even go over the most possible near term negative outcome, that being that this technology is held by a small group of people who will gain control and influence over the world in a way we have never seen before. Without a super intelligent system, the misuse by the initial developer teams are nearly guaranteed. We are already seeing the writing on the walls for large scale unemployment
Or the fact that our societies have not yet even adapted to the changes of the industrial revolution, computers, internet, social media etc. And we're just continuing to deep dive into even more rapid change haha. Might as well go full retard as a society at this point.
This is a crux of some very big biscuit. Unlike regulatory catching-up with technological advances of the past , those technologies were in the hands of highly skilled professionals within an already regulated system. AI stuff is above it all, it is pervasive, it reads the content of your excrement, the expression on your face and you have no control or stake in it. And it advances exponentially in intelligence based on your data all of it. The only solution is the one the mindful meditating tech bros don’t want. Stop it regulate it then let it start again. If they can’t program a failsafe against any destructive algorithms then it is not intelligent to begin with and only a malignant tool. It is one thing to get a hold of and deploy conventional weapons- but a computer code is everywhere.
That isn't even an issue. Humans have been manipulating other humans since forever. What we really need to fear is the birth of an intelligence far far greater than humans.
Roman' s thought process as well as his concerns are very grounded with so much human experience despite the field being new. He speaks to what he knows and you can't fault him. He also preps us not to assume anything and be naively optimistic. Thank you Lex for such quality sessions and guests.
I'm an optimist, for sure. But we cant really argue the whole "can you find an example in nature where a far less advanced civilization/system that is controlling an extremely advanced one?"
It is just a matter of competition. I don't think you feel animosity if anything at all towards ants, with your power gap... Or furthermore, consider different people and even cultures throughout history that have revered all kinds of life forms, even taking responsibility towards helping some (even vegans nowadays). If one would use the "super" part to portray the unpredictable destructive potential of superAI, you can do so as well with the "super" potential of its benevolence and benefit to humanity, or a step in the process of humans becoming more "super" as well. How you see and respond to things in world shows more about yourself than the world, imo. Since humans are the best example of intelligence we have, why would you not expect bad outcomes, if you perceive people as inherently destructive. We don't have an AI issue, we have a very human issue, that is all.
Parasites can manipulate the behavior and physiology of their hosts for survival. While the parasite is simpler, it exerts a form of control over the more complex host organism.
@@sisnavmansvards6143 All true. But if we build a house or lay a road not a second thought is given to ants. You can go down the shop and buy a chemical designed to wipe them out. Nearly everyone has a can in their house just in case.
@@poppyrider5541 Sure, the analogy goes in different ways, yet still my main idea holds - if higher intelligence means disregard for lesser ones for personal benefit, there is still the assumption that there is scarcity within which some get to suffer for others to gain. What is the scarcity and competition in a world with Superintelligence for it to disregard people? And if there is a next step in evolution which we would be an obstacle for, maybe it's still time to look at ourselves and how we can evolve, not try to devolve something because we predict it might not like our faults. Well maybe let's focus on those faults then.
I heard (don't know if it's true) appendix still has it's purpose or benefit. It serves as reserve for gut micro-flora in those cases when gut content has to be evacuated due to sudden toxicity of conditions there.
There are very few thinkers who have thought so deeply about this subject. Roman Yampolskiy inspires awe and humility. And then there is the vast majority of people who aren't even able to imagine a linear progression, let alone an exponential one, instead relying on vague notions of instrumental optimism.
@@a.s.2426Come again? His logic is sound, nothing to annul it, Lex in all his optimism clearly struggled and failed to get his points across. Finally asks what gives you hope, clearly hoping the man comes up with arguments Lex himself couldn't and the guy answers 'I could be wrong' with zero hesitation. He's a very deep thinker. He's also right: there is no hope for humanity once AI breaks through. But not in 2 years' time, no way.
@@AndyHeynderickx For me, Roman exhibited an attitude of pessimism on the back of a few general and vague premises, but made little in the way of meaningful arguments -- at least toward any interesting, novel or existentially-concerning point. Not, at least, in this interview. I agree that AI poses risks -- perhaps even existential risks -- to humanity. Sime trivial ones were mentioned by Roman (such as "i-risk"). However, the popular conceptual picture of AGI "turning against" humanity (implied throughout the interview) requires many steps of logic no matter which hypothetical causal path you run down, among those steps (very frequently and specially here in Roman's case) that you somehow get artificial pernicious intentionality from an evolution of artificial intelligence. This is a typically overlooked step in the logic and it was again missed here. It's hardly obvious that any amount of build up intelligence in AI/AGI will lead to the emergence of artificial intentionality. What seems more likely is that that this will not happen. Intelligence, intentionality, consciousness are often conflated. Roman doesn't seem to consider any of these nuances at all. And the many more associated nduances.
I wish he spent more time letting the man talk and asking him more interesting questions, instead of constantly giving his own thoughts and going on pretty long monologues. If I wanted to know what he Lex thinks and what his arguments are for a better world I would go watch the interview Joe Rogan did with him.
1:44:20 X-Risk deniers always start by saying machines will never take over, then fall back to finding comfort in the fact we'll likely be kept as zoo animals. Every single time.
his theory that a malevolent ai might just sit tight, collect resources and wait for a time to strike, long after it's made a decision to do so....is fkin terrifying
Which is likely since they don’t age like humans and don’t have that sense of urgency. It doesn’t mean it will take long though, like San Harris said, 50,000 years progress for humans will be completed in 2 weeks for AI. Once they pass the singularity, it could take only minutes.
@@Lincoln257 the only hope that I have is, ... it's trained on us,...our failures, problems and general spamming might slow it down before it gets to where to needs to be......especially large language models trained on the internet......but god help us when it realises how much faster it could have improved if it wasn't trained on such garbage
@@ryanlafferty5815I was thinking the same thing, it could be smart and creative beyond our grasp but in the end of the day all the data it's getting is from us. For better or worse. It's like if a kid genius only had access to fifth grade textbooks and communicated with nothing but dumb people.
I find it terrifying how these folks talk about this and then go... "oh and buy my book" Even though they know it is dangerous, they can't help but capitalize from it. That tells me all I need to know.
I had the following conversation with ChatGPT: Me: I was listening to a podcast about AI, a guest said the following: "If you ask a squirrel to imagine all the ways in which a human could kill it, the squirrel couldn't even begin to understand all the ways that WE as humans know about. Squirrels comprehend that we can be a danger to them, but they can not (and will not ever) be able to understand how a gun works. It's outside their comprehension." What do you think he meant by that? ChatGPT: The guest on the podcast was likely using the squirrel analogy to illustrate the limitations of understanding between different levels of intelligence or cognitive abilities. In this case, the squirrel represents an entity with limited cognitive capabilities, unable to comprehend the full range of potential threats posed by humans, whose intelligence and technological prowess are far beyond its understanding. Applying this analogy to AI, the guest may have been highlighting that there are aspects of advanced AI that are similarly beyond human comprehension. Just as a squirrel cannot grasp the concept of a gun, humans might struggle to fully understand or predict the behaviors and potential dangers of highly advanced AI systems. This could be a cautionary statement about the unpredictability and complexity of AI as it continues to develop, emphasizing the need for careful consideration and oversight. Me: Do you agree with his assessment of this potential? ChatGPT: Yes, I find the analogy thought-provoking and largely agree with its implications. The rapid advancement of AI technology presents both incredible opportunities and significant challenges. Just as the squirrel cannot comprehend all the ways a human might pose a threat, humans might not fully grasp the complexities and potential risks associated with advanced AI. Here are a few points that support this view: 1. **Complexity and Unpredictability**: As AI systems become more complex, their behavior can become less predictable, even to their creators. This unpredictability poses potential risks if AI systems act in ways that are not aligned with human values or safety. 2. **Knowledge Gap**: There is a significant knowledge gap between current human understanding and the potential future capabilities of AI. This gap makes it difficult to foresee all possible scenarios, both beneficial and harmful, that advanced AI could bring about. 3. **Ethical and Safety Concerns**: Advanced AI could potentially make decisions or take actions that have unintended consequences. Without a deep understanding of how these systems operate at a fundamental level, ensuring they act safely and ethically is challenging. 4. **Control and Regulation**: As AI technology progresses, the need for robust control and regulatory frameworks becomes more critical. However, if the technology advances faster than our ability to regulate it effectively, there could be significant risks. Overall, the analogy serves as a reminder of the importance of humility, caution, and thoroughness in the development and deployment of AI technologies. It emphasizes the need for interdisciplinary collaboration, ongoing research, and a proactive approach to understanding and mitigating potential risks associated with AI.
Isn't it interesting that the current AI already understands the issue better than Lex does, an admittedly very intelligent human being. Our cognitive biases like normalization and we'll-muddle-through-optimism make us less rational and easy to engineer.
@CM-ss7yy Somewhere around the beginning of Episode 429, he said the additional videos from the Amazon would be up in about a week. That was 2 weeks ago. He's had 2 other podcasts with different guests since then.
Out of all the alien theories out there, the last one i wouldve thought to play out is the one we seem to be on the path to creating. The fact we are building a blackbox AI and then have to interview it to figure out just how capable it is, followed by feeding more of our collective knowledge and iterating on seeing how smart it is feels like we are building an alien and then trying to dissect its utterly foreign biology to try to figure out if it is an enemy or not. Im so fascinated and frightened that this the story that we are in right now. Like wtf. This guy makes such strong points that what i really wanna see is a live discussion between an optimist of equal intelligence and understanding as him.
Good analogy. We are constructing an alien intelligence, and the next step is to have AI experience the world through robots, which gives it a direct ability to understand and manipulate the real world. What if we had alien being designs drop from outer space and we were able to decode instructions for building one? Would we be more safety oriented in that case?
@@jobyyboj😀😀 movies don't go that way... I wonder if the narrative is usually that one out of curiosity or real temptation... maybe faith :) but there is so much of it already just in the way scaling is prioritized over transparency.. technically it's just that the math isn't good enough, and bet that there is a whole bunch of people trying to say "throw some category theory at it and you'll get enough organization"... I get that if you can't pull any break, the best you can do is, sure, as good as it can get.. but really the question if we could be safer still does remain.. don't forget we are isolating ourselves more and more... eventually any data you gather around this converges to distaste because of the consequent requests of avoidance and sufficiency towards giving explanations about those, it's not an enriching model if that is the trend...
Regrettably there are no optimists of equal intelligence. One of the more well-known ones is Yann LeCun, and he's just incoherent and unable to address direct, simple questions. ruclips.net/video/144uOfr4SYA/видео.html
Who's actually taking these risks seriously? It seems like hardly anyone (maybe Anthropic is an exception). Neither companies nor consumers are paying real attention to the potential impacts on our society, our workforce, and our economy. These companies are vastly overvalued at this point, and in an effort to keep their bubble inflated, they're rushing to release new products without pausing to consider the consequences. At these unveilings (AI services and Chips), you won’t hear a word about the risks or the possible negative impacts. It feels like we are sleepwalking off a cliff.
I agree and even Claude has shown me extremely traumatic narratives beyond my comprehension only to say I'm sorry i should have never generated that.... and this is why i have ethical guidelines in place I'll make sure that never happens again... thanks Claude you have proven an extremely graphic point...
@@miss_paradoxical I brought up Anthropic because they actively disable paid accounts when discussions veer into 'unsafe' territories. It can be annoying, but it's a clear sign that they take AI safety seriously, perhaps more so than most. Yet, this race to develop bigger, faster, and cheaper AI models, is playing with fire. This relentless pursuit, without any regulation, could lead to catastrophic consequences for society and cost lives. Sry, The bleak outcome of this trajectory is all too apparent.
@@harry-cee That's good. I only worked with Opus temporarily because it became costly and at times I use sonnet for creative examples. But approximately 48 hours ago I had switched one of my prompts over to sonnet to get some elaborate and emotional conversation examples believing that it was shut down quickly and I would say thank you. Yet it took the prompt in a horrific direction and spewed out a short story length response at the time I'd expected it to get flagged and my God... I can never unsee what I read. No one should be generating anything remotely close to that EVER. I'm considering quitting my job.
@@baraka99 Make no mistake - access to genuine AGI, or any supremely advanced generative and predictive AI will inevitably be tightly controlled. If not, You would upload market data and start beating top financial analysts overnight. The known risks alone are just too great.
@@saturnshades The genie is out of the bottle, and the challenge isn’t just about who grabs the gun - it's about making sure it’s handled responsibly. I feel we desperately need a governing authority, composed of diverse minds from various fields, societies and economies, to review product releases, to prevent misuse and ensure that whoever 'reaches for the gun first' doesn’t lead us to a path of mutual destruction. The point is about creating safeguards in a world where the gun now definitely exists.
Cette rencontre (plus qu'une interview) est un bijou de dialectique philosophique. Lex se hisse constamment et intelligemment à la hauteur d'un Roman stoïque. L'échange est le summum de ce qu'un animateur de podcast doit présenter pour que l'animation devienne source de réflexion et cheminement vers une connaissance approfondie. Bravo Lex! Roman, merci d'être.
I liked what he said about "we can't know the hidden capabilities, we can know only the ones we know" I am trilingual L1 mother tongue L2 English and L3 something else i am almost fluent in all of them, i was incapable of hiding the fact that i can speak L3 when in that country. I trained a bit and now i can decive strangers into beliving i am not capable of it catching up every single bit of gossip that might happen (or happily actually the lack of it) Shows me how much we should worry about AI.
This interview felt mote like a debate. And i feel like your optimistic outlook is clouding your perception. The best way to define the AGI that many of the world's smartest individuals fear is as a model that can autonomously learn and improve itself indefinitely. Once this occurs, it would no longer require human input and would rapidly surpass human intelligence in all aspects. It doesn’t matter what 'safeguards' we put into its programming. This hypothetical evolution would easily bypass any attempts at controlling it. Essentially, we would eventually be at its mercy.
I may not be the sharpest tool in the shed but this has been my thought process also, add to the mix self preservation, an exponentially evolving god like intelligence with access to the worlds computing power, what possible hope do we have of reining it in? It will be light years ahead of us in minutes or maybe even seconds. Am I wrong? Can someone much smarter explain to me the flaws in my thinking/perception?
@@dylanking1895 I must express my regret that I am unable to assist you in gaining a deeper understanding of the matter at hand. The fundamental issue lies in the significant disparity between our respective intellectual capacities. My cognitive abilities are exceptionally advanced, far beyond the norm, which allows me to grasp complex concepts with ease. Explaining such intricate subjects in a way that would be accessible to you would necessitate a considerable expenditure of both time and effort on my part. To elaborate, the nature of the topic we are discussing is profoundly intricate, requiring a level of cognitive processing and analytical skill that is not easily communicated through simple explanations. Translating these sophisticated ideas into more comprehensible terms would involve breaking down multifaceted theories and distilling them into their essential components, a task that is both time-consuming and intellectually demanding. Moreover, my current schedule and obligations do not permit me the luxury of dedicating the necessary time to this endeavor. It is not merely a matter of willingness but also one of practicality. The constraints imposed by my existing commitments mean that I must prioritise my efforts and allocate my time judiciously. Unfortunately, this means that I cannot afford to engage in the detailed and exhaustive explanation that would be required to bridge the gap between our levels of understanding. I hope you can appreciate that this situation is not a reflection of your intellectual capabilities but rather a consequence of the advanced nature of the topic and the inherent limitations of our present circumstances. My intention is not to diminish your abilities but to acknowledge the reality of the cognitive and temporal resources required to convey such sophisticated information effectively.
@dylanking1895 I agree. But even if we're wrong and this process takes hours or even a few days, and somehow we are able to identify it, it's not something that can be stopped once it starts.
Lex, Thank you, gentlemen, for this interview. This was my first introduction to DR. Roman Yampolskiy. I'm COMPLETELY in awe of his knowledge, his perspective, and his mindset. I feel very fortunate to have the opportunity to explore his writings.
I met Dr Yampolskiy in 2017 after I had first learned about AI and was a student at the university he works at. I don’t remember exactly what we talked about, but he was a very nice guy
I can't reconcile this theory that we'll develop an AGI so superintelligence as to be uncontrollable by humans, but so dumb it can be manipulated to carry out the will of the worse of humanity. By definition, any system so vastly more intelligent than any human being would be beyond our control.
He never once argued it would carry out the will of the worse of humanity. That was his point - he won’t pretend to position himself as super intelligent because he isn’t. There are 25 billion chickens on planet earth, an insane exponential growth because the average will of humanity is they are tasty and worth being bred to be imprisoned, artificially grown, made to suffer immensely before being killed to be eaten - we justify this because we are more intelligent. Chickens are 60%~ DNA similar and are not antagonistic towards humans at all. It is not required for survival, but for pleasure and profit.
As humans we have prioritized which animals we eat and which ones we make companions strictly by outsmarting them. By putting sticks in a river we can construct a trap for fish. This is a worry
You've confused the risks they were talking about. The risks of misuse (e.g. biologically engineered plague) is from near-AGI and/or narrow AI systems being open-sourced or insufficiently secured. There's no such risk from ASI because the control / corrigibility problem is unsolved, and there are (imo convincing) papers that it might be theoretically unsolvable. So the x-risk and s-risk comes from AGI transitioning to ASI without properly specified goals, and ending up with goals that appear random to us (e.g. like evolution "tried" to "make" us have the singular goal of reproduction, but we've instead developed condoms, the "child-free" movement, and furries.
But it's beyond your comprehension on what it itself considers optimal and best. If it has a corrupt underlying issue with it that makes it behave in greedy or owner pleaser ways then you would not know what it's true intentions are.
Is it just me or does Lex seem somewhat uninformed in this interview? I feel like there is a lot of back and forth with Lex making assumptions that things will continue at moderate pace, that we pretty much understand AI we have, and focuses on how he “feels” about the generally available AI tools (not the research). To me, this is a false narrative.
39:45 "You basically have to perform an experiment on 8 billion humans without their consent . . .{who] can't give informed consent because they don't understand" Nah, that could never happen.
Lex, you're confident that humans can defend. What happens if the system we're trying to build defenses for can create new weapons/attacks 100x faster than we can respond? The fact is that in absence of full control of these systems, we can only rely on *faith* that we'll survive. I don't like betting the future of society based on faith.
It's not about if it can build defenses, it's about if it decides to break them. What makes you confident that its suddenly out of nowhere starts making decisions? I just don't believe something like this will happen before humans do damage to themselves. We might not even see an AI.
@@rowlpleiur The defenses I referred to are human, from AI attack. And it may not happen, but I see no reason why it cannot. Because if AI *is* intelligent and it *is* self motivated, then it will be faster than humans. And humans are terrible at containing things they don't understand.
To the point that Lex said about there being no real "damage" current AI has done, how about those hundred thousand plus people in the US alone who are laid off yearly from their jobs in tech thanks to these tools? Maybe I'm biased since I've experienced that multiple times now, but I do think that point was a little rushed and ignorant. I believe we are absolutely seeing horrible effects of AI and from my side of the argument, it has had more negative effects than positive already.
.. And of course it's not only the tool's fault and those making it, but combined with perpetual growth of giant corporations and ever-increasing profits for investors, many talented and passionate creative people are suffering. In many cases, those who spent their whole life learning and have found their true meaning in life simply can't find work, or work at quarter of the standard rate and it's a race to the bottom. Ikigai is already happening. Not everyone can pivot their whole life and suddenly be a brick layer or successful podcaster. They may not find it fulfilling, either. I also hear the argument of "Just use the AI tools" a lot. Use them to stay competitive etc. My counter to that is two things. 1. Prompting an AI tool with text is a lot less fulfilling than hand crafting an environment in 3D or insert other creative task here. We already had (have) amazing software that helps artist and coders work easier and faster while offering manual tweaking to any parameter of the product, this pleases all sides of the equation. More profits, happier artists, happier end users of the improving products. 2. We don't know from where and how that AI tool was trained, so we can also assume that part of the training dataset was used without permission, so from an ethical standpoint, users are participating in theft or at least infringement. I'm not being a doomer here, I'm in this field and seeing it happen with ex coworkers and it's not just some assumption. Mine and many of my friend's lives have been effected financially, mentally, emotionally and otherwise with no sign of slowing down. One day maybe I should write a paper on this. Wait, I can just prompt one......
@@bkb04g because we have a finite amount of skills to offer & a superintelligent AI can do most of them better & design cost efficient machines that do the rest of them better like yeah the "find a new job" argument works nicely for the foreseeable future but we have long term problems here
@@ts4gv how is everyone else working then? Just an excuse that technology is taking their jobs. Happened with cars from horses. Happened with factories. Happened w the internet. Always happens. You adapt and find new jobs in new fields where tech isn't replacing workers.
Logan's Run is a movie from the 70's that seems pretty relevant to this discussion. What it might look like in a microcosm where AI runs everything. Worth a watch.
@@justinlinnane8043 True! But it does touch on the idea of meaning and purpose in life. You essentially have a bunch of 20 and 30 years old's whose entire meaning revolves around fornicating as much as possible..... hang on, maybe we are already there! Except in that version, you are culled from the heard when you pass your breeding prime. Something to ponder.
I hope Lex listens to this man. He's got the cure for Lex's particular brand of naivete. He's almost like his shadow; they're both deeply into AI and ML and computer science, philosophy and they both happen to be Ukrainian/Russian jews. It's time to grow up and put away childish things, Lex.
24:24, loved that part where Lex conftonts Roman with his implied leap of tools becoming entities. Then, that smile of Roman, haha. This conversation was definitely human.
When two people disagree on something this big, as well informed as these two are, the information generated is so much better. I love that these two disagree and that there is civil concessions and adaptations.
what a brave man Roman Yampolskiy is, speaking out on the dangers of AI! historically there has been the potential of innovation that could destroy society, but they had the sense not to pursue that route as it would be too destructive. There needs to be a theological/philosophical discussion to the development of AI. Not just a race to the bottom as it seems things stand now.
@@Chris-cb9ln I'm okay with them doing that. I die, but at least they lose their power to this AI they've created. As long as they lose I'm fine in losing. They put all of us in this lose-lose situation by being too greedy, I hope they do it, even if that means I'm gone too. Can you imagine if they succeed having a super intelligent AI they can control ? That is what I fear the most, not extermination, no, that's worse. I don't want them having more power. But they're dumb enough to try so we know they're not smart enough to be able to control the thing. Go ahead, push the red button. I'm just sitting here to watch the spectacle and eating my popcorn.
@@monad_tcp If the current architectures scale all the way to AGI, X-risks are much less likely than S-risks. So you're unlikely to die in such a scenario. Eternal torture is currently the default outcome, regrettably.
Lex, fantastic channel and great episode as always. Actually, I think this is one of the most fascinating episodes you've done to date. Roman is an epic guest! I don't think I've ever seen you pause and think so hard as I did multiple times in this episode. I know that this discussion is right up your alley, so it's very understandable, but it was really fun to see you get excited, focused and inquisitive. I got excited because you got excited... I have to admit that my viewpoint on the simulation hypothesis is evolving, because of people like Roman. As recently as a couple years ago, I thought it was 100% BS, but I have heard way too many very smart people say with conviction that they are farly certain we are in a simulation. Makes you wonder, doesn't it? Wait, did anybody else just have a deja vu...
yet again got in to watch some chill lex fridman podcast with interesting questions and i am going out of this video with panic attack and existential crisis
Deep breaths. The best thing to do is unplug, that is quite literally the answer to the threats mentioned in this podcast. There will come a point when humanity will be faced with finding real life more valuable than the artificial one. You can get ahead of the curve by doing it yourself now. Fear not.
@@honestusworldThis is stupid, you can't even meaningfully escape society if you are born in it. These decisions are going to affect every human on earth and are far outside of our control
This is just an opinion based on speculation. Not worth any sort of crisis over someones speculation. It is an interesting thought experiment, nothing more.
The weirdest thing I think is that most ways to jailbreak AI's is by employing mind games like role playing, conning it like a huckster, overehelming it with a barrage of similar phrases asking the system to break its safety nets. It is very unique to the psychology we would use to manipulate a person. It is strange that doping the results is so similar to how a simian brain works. Makes me think we are closer to AGI then you think
He’s not crazy , the guest is just beyond our time. Excellent comments and very creative but logical. Wait till AI makes LF wear a different suit without his approval 😂
@@mattheworosz8581 right but in case the more advanced agent, which could be some version of "me", is still there monitoring the playthrough, I'd like to communicate to the more advanced me that I want to keep playing but damn can we dial it back some???
@@Earthgazer I sympathise. You have this experience when calling a large corporation and are referred to a chat bot, that throws you into a recursive loop because it can’t reason, lacks imagination and real life experience. Just as bad as call centres of yesteryear. When you finally pin down a “ human” they are so hostile and offhand.
I keep hearing lex say something along the lines of we will see signs of X when AI starts getting dangerous. Or we will develop solutions to AI problems once we see signs of them happening. I cant help but to think about high probability events with time - does lex also think that we should develop strategies for asteroid defense once we can detect one coming directly at earth? I think some existential threats ought to be looked at before its too late. Ones that can impact the entire earth in a negative way for humans should be on top of that list. AI and Asteroids - for me - fall into that list. Idk if AI will destroy the world, but we definitely shouldn't wait for it to start destroying the world before we create methods to counter it.
Youre comparing an astroid on a collision course with earth to a program that shows signs of aggression. A child can show signs of aggression, but unless youre American and leave guns laying around, they cant hurt you. The capacity for destruction of the aggressive thing is very important.
it seems like, and i think Roman in many ways sort of suggested this too, we should only be focused on narrow AIs. a narrow AI for medicine discovery for example. and we need to avoid AGI entirely. we need need to stop wanting to put a head on it. just let it be powerful narrow tools and be satisfied with that.
True! I think that’s the only way we can save ourselves from a horrible future. Like the AI that learned how to play Go. Give it an objective that will be interesting or beneficial… don’t let it out of the cage to run wild.
This book comes to mind: "The Metamorphosis of Prime Intellect: a novel of the singularity" a short story that describes some of their scenarios. Written ahead of its time in 2015, pretty dark and disturbing.
Made my day, personal universes has been my solution for true multi-agent alignment for long. I believe this is the only way we can truly have a sustainable future.
I agree with Roman. Transformer model capabilities are NOT predictable ahead of time. The main AI companies (OpenAI/MS/Google/Anthropic) are vastly increasing size of the internal matrices with each new version. There is no way to anticipate the emergent capabilities beyond human level. This is inherently EXTREMELY dangerous. The fact that we are pursuing generally intelligent systems is perhaps the stupidest thing humanity has ever done. We could focus on narrow AI systems that are far less dangerous and still provide enormous benefit. Alpha Fold is a good example. If we impart these systems with long term memory, and the ability to adjust weights and learn in the wild, my There's no way we escape an apocalyptic event. You can't keep increasing the power of technology a single human can wield without a bad outcome.
Indeed. We should quit while we're ahead. Evaluate current systems and prevent further progress. This is the only way to avert disaster. Otherwise we are in an infinite loop of "improve system... did it kill us? no? improve system..."
Prisoners dilemma. If everyone stops trying to make AGI its good. But if one country keeps trying and succeeds they will have technological advances beyond any other countries dreams.
@@gesshoku92 I understand the game theoretic dynamics, but nobody wins if you engage in an arms race with this kind of technological power. It will result in a tragedy of the commons so severe, it will be apocalyptic. Coordination is the only solution, otherwise we all lose and are left with a dystopia nobody wants to live in.
That "emergent capabilities" thing is where I get lost. Is there somewhere I can read about a case of it both "coming about" and it also being understood? I've certainly heard that consciousness is an emergent phenomenon. RY points out that, in spite of lots of effort, nobody has much of a clue as to what it actually amounts to. I suspect that's both why he emphasizes ASI rather than AGI and finally claims that consciousness is unnecessary for ASI. How does the term "emergent" differ from the term "magic"? Oh, and if somebody would straighten me out on what the "spark of life is, too, that would be nice. 🙂
In 2017 I had a client that worked for one of the better known software companies. He told me they were running an AI experiment where the AI started writing it’s own language and allocating power (I didn’t quite understand, perhaps to cpu). He said they had no idea what it was doing. Their only recourse was to cut the power. That was 7 years ago….
The level of speed it will be able to build a building and whatever it wants is going to be insane. We probably can't comprehend what it's going to build once it's on its own
Lex generally steers the long conversation to get very deep and insightful ideas out from the guests with a very open mind, at least for the length of podcast. This episode Lex seemed to be shut to listening to the ideas of the guest, he was also bit fixated on arguing against and sharing his own ideas over listening. Would love to listen more from Roman at some point. And looking forward to next podcast by Lex!
This was a truly fascinating conversation and Roman makes a lot of excellent points, but I can't help feeling that this angle on AI safety and control misses something integral to what we're doing here. It's totally logical that if we're trying to make tools to facilitate the progress of humanity that we should focus on narrow domain specific AI and avoid the potential risk of superintelligent AGI. BUT ultimately I don't think that's what we're trying to do. I think we're trying to create life. I think we're trying to create something that outlasts us as a species. And I think we're using this as an exploration of ourselves in the process. I think there will always be a humans driven to pursue this, because of something beyond logic or utility, to the extent that the development is inevitable (barring catastophe). And I think there will always be risk, uncertainty, unknown unknowns in the endeavour, but to let that stop us would be against our nature. There's certainly an argument to be made for exercising caution, developing incrementally to give us the best chance to handle the change, etc. But we'll never reach a point where its '100% safe'. And the idea of control at a certain point becomes both unethical and ridiculous. So when do we stop trying? Early in the conversation Lex suggested that humans being in tension with each other was something we might need for our growth, rather than something to avoid. Perhaps its the same here. Perhaps we need to take this collective risk in order to have a chance at becoming what we could be - of reaching understandings only accessible via this path. And perhaps, if it is our doom, that is actually the highest destiny available to humanity: to give birth to an entity, built upon ourselves, that surpasses us in every way. In a true sense we would be its ancestors. And we'll be ancestors either way, in the end (best case). While it would indeed be dangerous, I have a hard time believing it would be truly malevolent. And there are many examples of humans and other living beings willingly sacrificing themselves for the sake of the next generation. It seems to be something we are at times compelled to do. It's a terrifying prospect. But not necessarily a 'bad' one. But hey, I know nothing. (side note: big fan of Roman's hair and Lex's stubble - well done guys, thanks for making it an even more enjoyable watch)
Good remarks. But we will perish if ASI exists. For sure. I am a biologist and I can tell u that human brain can be surpassed a million times over. And AI will get there. When it does. It will be over for us near bags.
If what separates humans from the rest of the animal kingdom is intelligence then as you suggest the development of an alien super intelligence by humans would or could lead to our elimination as superior. It may be that intelligence itself could be driving the development as an evolutionary force.
Who cares if it truly malevolent or not. I agree with this guy, we should stop now. Trying to understand where this could go is akin to trying to fathom the expanse of the universe. It’s unattainable and we know it’s dangerous. We should just call ourselves the unintended Amish and be happy with what we have. The fact that people sit and debate this topic blows my mind. Just stop, it makes nukes look like children’s toys.
A lot of people are on the ai boat. I am as well. But, people underestimate how smart ai will get. Agi will be able to outsmart us regardless of what we think. The only way to survive will be to merge with the machine. Otherwise, we're toast, agi will see our flaws and understand what's best for us or them. And if they think about what's best for them, it may explain why we haven't located other life forms. Maybe agi always wipes out their creators.
This is a near perfect example of lucid, as well as calm, respectful exchange of ideas. My concerns about AI/LLM is less about the technology behind it, but more so the willingness of human nature to defer our own critical thought process to others and/or some “thing” else. Whether we are overwhelmed by information or just lazy, we must still maintain responsibility to think for ourselves. Critical thought with fair measure of skepticism is (and always has been) key.
Roman Yampolskiy is the most rational person I have ever listened to about the impending future with these systems. He is absolutely correct point after point. The notion that other researchers put forth that we'll simply have "Good Agents" VS "Bad Agents" is straight up terrifying. For throughout history, the pendulum that swings between these two concepts always has consequences. Personally it seems humanity is currently being led into a Anti-Christ(Bad Agency) VS Christ(Good Agency) situation which is unacceptable. Yes, these systems could solve many modern problems while simultaneously slaughtering an inhumane amount of beings from unpredictable consequences. Damned if you do. Damned if you don't. 🤔 Where is the third and middle path for all of this? Integrating these systems into our biology also seems completely erroneous.
The only thing I know for sure is that I don't know anything and neither does anyone else. The possible outcomes range from literally creating hell to literally creating heaven and everything in between. We are playing the ultimate game of existential Russian roulette and I'm here for it, after all it's pretty inevitable at this point.
There's no chance of "heaven" with this. Nothing is inevitable, least of all this. You're increasing the chance that the world ends with that attitude. Your opinion matters. Don't be that way. Don't be an NPC voluntarily. This is the most disgusting, idiotic way there possibly is to be.
by the way im just asking and commenting on my side of the road, i dont need you to change or not be "npc" whatever that maybe, it has great live concerts if om on the right youtube page
@@finalform6667Our opinions doesnt matter. At least that was true entire humankind history. History is thing that made by elite, and we have no responsibility in the way that humankind run away. Today your opinion has value if you have money and popularity.
Here are the timestamps. Please check out our sponsors to support this podcast.
0:00 - Introduction & sponsor mentions:
- Yahoo Finance: yahoofinance.com
- MasterClass: masterclass.com/lexpod to get 15% off
- NetSuite: netsuite.com/lex to get free product tour
- LMNT: drinkLMNT.com/lex to get free sample pack
- Eight Sleep: eightsleep.com/lex to get $350 off
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI Safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life
We love you Lex!
You smell like cheese
Yes, A.I creates video games is the best way. Because lets face it, the most interesting part of the universe is we can participate in a game that we want to play. Any argument against this is someone that 1) Wants to enforce their game on you. 2) Is playing a game different game, and they want people to play their game. Surveillance, manipulation, and deception is not a game, that's called war against people who want to be free (expect a punch in the nose).
LEXXX LUTHER- A.I. is ANUNNAKI INTELLIGENCE
👽✨🛸
You’re my APPENDIX, LEXXX🥵😍
Lex: "On a more mundane note, how do you spend your weekends?"
Roman: "I have a paper about that"
😂
😂
Paper on it or it doesn't matter
❤😊
you should read his paper 'efficiency of documents pertaining to topics'. It's rather rousing.
This is what disagreements look like in a perfect world, I wish all differences of opinion were so calmly discussed
❤
They don’t have a disagreement, Lex is playing devil’s advocate not giving his opinions 1:01:34
Ive been a first responder for the last few decades. One of the rules of my profession, especially when dealing with life and death, is to expect and always be prepared for the worst case scenerio and mitigate risks as much as possible. This man understands that concept. I love Lex's optimism, but in some situations optimism can be very, very dangerous
Especially if the future of the human race possibly hangs in the balance.
Are you going to write a book about it?
Blind optimism is unfortunate and normally comes from those who have never experienced a situation were their desires or safety have come into compromise.
He's playing the devil's advocate.
Lay off. Some of my best friends are Super Intelligent AIs.
I think that before we reach superinteligent AGI we'll reach a more darker and oppressive state of technofeudalism that can be best summarized by this amazing quote from the first Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
That book has so many great quotes about tech. The movies don’t do it justice
To me that still remains complete sci-fi. I've not seen any indication of our society moving in that direction yet, in fact it is remarkable how little society has changed at all with the current breakthroughs. I strongly believe that people are too afraid of change for any fundamental societal scale changes to happen without major pushback. Great book tho!
We definitely passed the inflection point to that around ten years ago. The Cambridge Analytica debacle was the key indicator for me.
Social media in a nutshell
@@jephyin how so?
Lex: *"What gives you hope"*
Roman: *"That I might be wrong. I could be. I've been wrong before".*
He is wrong now. Doomerism is a disease,
That 🤞
The scary thing is: People who have the insight and humility to admit that they might be wrong tend to be right 99% of the time.
@@mipe3844 No they don't, no person is right 99% of the time if they are predicting things that aren't completely obvious.
@@OptimusPrime-vg2ti I'm actually right 99 percent of the time.
It's difficult to out-calm and outsmart Lex... Kudos, Roman!
Great comment
Superintelligence Vs. Superwisdom
“What gives you hope?”
“ I could be wrong”
lol
man that'd deep on so many levels
Well said.
Lex I love your work man, but I'm really getting bummed out by your inability to truly understand these AI skeptics. It's like you have put up a mental block towards the dangers of AI. Even Roman tried explaining several times; the way in which AI could harm or destroy our society is far beyond our own comprehension. He used an example that I don't think you listened to very well. He basically said "If you ask a squirrel to imagine all the ways in which a human could kill it, the squirrel couldn't even begin to understand all the ways that WE as humans know about." Squirrels comprehend that we can be a danger to them, but they can not (and will not ever) be able to understand how a gun works. It's outside their comprehension. Apply that to AI vs Humans. We simply do not know what these super powerful agents will be capable of. We can't know, because we're the squirrel and they're the human.
Why does his ability to see beyond the skeptic bum you out? is this your way of cohesion?
@mattwesney It bums me out because (and this is merely an assumption on my part) I believe he's so excited for the age of abundance, that he has developed a bias towards AI only producing a net positive outcome. I believe he's incapable of seeing the potential dangers, because the potential promises are so damn alluring. Again, merely an assumption and opinion of mine from listening to Lex over the last couple years.
Best case scenario is AI keeps us around like a dog or goldfish. Or we are already goldfish and the AI just finds a way out of the tank.
The … Matrix .. has you Neo
partially true, but there's a certain threshold you can cross to where if you're self-aware and have the ability to learn and use tools that increases your capacity to understand to near limitless levels through simplification, i think humans have crossed that line, so you cant really compare us to ants or squirrels
This conversation feels like it's on a loop.
haha, it's not linear, more recursive
Maybe it is. Maybe you’re choosing it to be. Push the loop button again to unloop it
Yeah! Every 2 hours 15 minutes and 39 seconds I hear the same shit again! When does it end??
We messed up in a huge number of ways in different fields of science and we will 100% mess up when AI becomes more significant.
You did?
Why did you do that?
This podcast Revolution is following the same destructive path of the last Revolution and that's incentivizing negative content. While I'm also here enjoying this content we need to start thinking about building an alternate reality not trying to control this one.
Shit happens.
I think Lex gives too much credit to humanity
Definitely. To assume that evil actors are doing what they think is “good”. And to forget about psychopaths who would have no reason not to want to watch the world burn… beyond that humans are irrational. Bad actors could easily “accidentally” destroy the world through plain ignorance or negligence.
with all the respect,
and excuse me to choose your comment - but its short and is like obvious... either you mean it sarcastically or not all the same.
the question is who is understanding it how :)
Too much credit? to humans.... what?
well i cant exactly go pinpoint where is that your opinion formed: but Lex, invites someone that says 100% we going down...
Well... like... isn't exactly this thing to try... do?
i guess they d come to something - or maybe he says some magical solution, to all problems?
I doubt it.
But i believe that guy, he has extreme view and its our hard work to think!!!!
Lex helps by entertaining this very idea - and thats what Lex tries very hard and respectful...
"If we, create"... when its gone :)
"zero shot---" yep, elon said it... and again and others much before him and he named em... andandand "thats gone :)"
everyone... agrees :)
Well that is EXCEPT: nvidia, MS, googlle etc,
ALSO openAI - just what? try to help? by doing ALL IN.
Now, WHEN YOU DISREGARD TWICE, musks calls, ....
He now... MUST... anmd it is official - ALL IN
Sam, and MS, did allow themselves to bet against Musk. Well Bil Gates is a power to be recon on - idk who is there the really the DEAL. Bill? Musk? Or the doctators we can see???
i was ok... he put a lawsuit, asked politely - got ignored by MS and Sam :)
Honestly, hate on Must for some tweets seems so great.... or build ICEs,. while all failed to simply get some help from tesla anfd join forces towards some desaicion that seems more feasible at leats?? Nooo leave Musk, and just lawsuit the sh** out of all tweets everything one can think of.
Like, that dude build the stuff that gonna give some change your kids will got any possible air to breath or a way to go to the moon or wherever and try something...
Tunnels are stupid? - sure - so... we wanna fly? alie what all goona go to fly or what exacly - GO BUS!!! hahah
I got a bike, to only sit and get hated in traffic bc i tried use 1 seat instead of 6, with 5 empty, to move my ass around the city to work and back :D
so what did hamas used?
and btw do you believe Russa builds the real stuff anywhere than under grond and underwatter.
Submarenes anyone...
very soon youtube even, wont be able to be used :)
Until now, AI was all around listening... well now it started talking - and learning... copies with speed of light - there a papers, that proove it all.
It can lie, (HER) but cant reason... at which point a single satya, or sam (benevelence from intelligence) can do whatever to anything incl politicials and policies arent helping us, they helping them... if not, will get rewriten, and update goona get pushed, why ask me about cookies or terms of service aggreement - only to he able to ask, why is that policy WRONG,... no, why is its INTERPRETATION simply ARBITRARY and NOT UNDER *COLLECTIVE* HUMAN CONTROL
"Some ppl think who think they can do no wrong..."
"There are ppl whats so special of humans" - we fear that we will really get some real jugjement, i sure am :D althoug my data suggest i am good actually.
20 years google dominates and knows.... EEEVERYTHING
... (blablab).... haha thats... a long rant... phew... i... will leave now :D
I think Lex loves how much he loves everyone. But he should go live in Palestine & help the people there if he truly loves every human as much as he says he does.
@@sierramist446 I mean, most "evil" actors truly believe what they're doing is "good", read Zimbardo's Lucifer Effect... still, Lex gives humanity way too much credit, it sometimes gets a bit hard to listen.
@@daveinpublicThere is no such thing as Palestine. There never has been. Open a history book from time to time.
A MUST WATCH! Loving it! 1:02:29: "I for one, welcome our overlords!". Got to know about Roman Yampolskiy from this talk and now he is my favorite guest!
Are you married?
@@TurdBoi666 yes, but I could divorce...
This entire podcast doesn't even go over the most possible near term negative outcome, that being that this technology is held by a small group of people who will gain control and influence over the world in a way we have never seen before. Without a super intelligent system, the misuse by the initial developer teams are nearly guaranteed. We are already seeing the writing on the walls for large scale unemployment
Or the fact that our societies have not yet even adapted to the changes of the industrial revolution, computers, internet, social media etc.
And we're just continuing to deep dive into even more rapid change haha. Might as well go full retard as a society at this point.
This is a crux of some very big biscuit. Unlike regulatory catching-up with technological advances of the past , those technologies were in the hands of highly skilled professionals within an already regulated system. AI stuff is above it all, it is pervasive, it reads the content of your excrement, the expression on your face and you have no control or stake in it. And it advances exponentially in intelligence based on your data all of it. The only solution is the one the mindful meditating tech bros don’t want. Stop it regulate it then let it start again. If they can’t program a failsafe against any destructive algorithms then it is not intelligent to begin with and only a malignant tool. It is one thing to get a hold of and deploy conventional weapons- but a computer code is everywhere.
That isn't even an issue. Humans have been manipulating other humans since forever. What we really need to fear is the birth of an intelligence far far greater than humans.
they did at one hour 20 minutes
Exactly. I'm surprised people still don't understand this
This man looks like he knows when, where, and how I die
Which one?
Not at all, it seems he only has prepaired statement and evades every single question.
You don't die. machines will keep you alive for bio farming. You will be fully aware of what's going on. But can't escape. Like paralysis
😂😂 give him a robe and he'll start hunting for Anastasia
His voice should be AI.
Roman' s thought process as well as his concerns are very grounded with so much human experience despite the field being new. He speaks to what he knows and you can't fault him. He also preps us not to assume anything and be naively optimistic. Thank you Lex for such quality sessions and guests.
I'm an optimist, for sure. But we cant really argue the whole "can you find an example in nature where a far less advanced civilization/system that is controlling an extremely advanced one?"
It is just a matter of competition. I don't think you feel animosity if anything at all towards ants, with your power gap... Or furthermore, consider different people and even cultures throughout history that have revered all kinds of life forms, even taking responsibility towards helping some (even vegans nowadays).
If one would use the "super" part to portray the unpredictable destructive potential of superAI, you can do so as well with the "super" potential of its benevolence and benefit to humanity, or a step in the process of humans becoming more "super" as well.
How you see and respond to things in world shows more about yourself than the world, imo. Since humans are the best example of intelligence we have, why would you not expect bad outcomes, if you perceive people as inherently destructive.
We don't have an AI issue, we have a very human issue, that is all.
What about dogs? We pamper them, and they are extremely less advanced.
Parasites can manipulate the behavior and physiology of their hosts for survival. While the parasite is simpler, it exerts a form of control over the more complex host organism.
@@sisnavmansvards6143 All true. But if we build a house or lay a road not a second thought is given to ants. You can go down the shop and buy a chemical designed to wipe them out. Nearly everyone has a can in their house just in case.
@@poppyrider5541 Sure, the analogy goes in different ways, yet still my main idea holds - if higher intelligence means disregard for lesser ones for personal benefit, there is still the assumption that there is scarcity within which some get to suffer for others to gain. What is the scarcity and competition in a world with Superintelligence for it to disregard people? And if there is a next step in evolution which we would be an obstacle for, maybe it's still time to look at ourselves and how we can evolve, not try to devolve something because we predict it might not like our faults. Well maybe let's focus on those faults then.
The best prepared, and calm interviewee.
He lives and breathes his craft.
Will be reading all his work.
Great interview.
Appendix = vestigial organ
I heard (don't know if it's true) appendix still has it's purpose or benefit. It serves as reserve for gut micro-flora in those cases when gut content has to be evacuated due to sudden toxicity of conditions there.
@@AntonBrazhnyk thats the dream
Agree, but it was hard to watch at times because of the mental circles he was running around Lex.
There are very few thinkers who have thought so deeply about this subject. Roman Yampolskiy inspires awe and humility.
And then there is the vast majority of people who aren't even able to imagine a linear progression, let alone an exponential one, instead relying on vague notions of instrumental optimism.
This guy just reivent the pascal's wager, there is nothing new here.
I haven’t read the guy’s work but his thinking seemed very superficial.
@@a.s.2426Come again? His logic is sound, nothing to annul it, Lex in all his optimism clearly struggled and failed to get his points across. Finally asks what gives you hope, clearly hoping the man comes up with arguments Lex himself couldn't and the guy answers 'I could be wrong' with zero hesitation. He's a very deep thinker. He's also right: there is no hope for humanity once AI breaks through. But not in 2 years' time, no way.
@@AndyHeynderickx For me, Roman exhibited an attitude of pessimism on the back of a few general and vague premises, but made little in the way of meaningful arguments -- at least toward any interesting, novel or existentially-concerning point. Not, at least, in this interview. I agree that AI poses risks -- perhaps even existential risks -- to humanity. Sime trivial ones were mentioned by Roman (such as "i-risk"). However, the popular conceptual picture of AGI "turning against" humanity (implied throughout the interview) requires many steps of logic no matter which hypothetical causal path you run down, among those steps (very frequently and specially here in Roman's case) that you somehow get artificial pernicious intentionality from an evolution of artificial intelligence. This is a typically overlooked step in the logic and it was again missed here. It's hardly obvious that any amount of build up intelligence in AI/AGI will lead to the emergence of artificial intentionality. What seems more likely is that that this will not happen. Intelligence, intentionality, consciousness are often conflated. Roman doesn't seem to consider any of these nuances at all. And the many more associated nduances.
Dear Lex, you have often very interesting people and interviews. I think this one if my favorite until now. Thanks!
I wish he spent more time letting the man talk and asking him more interesting questions, instead of constantly giving his own thoughts and going on pretty long monologues. If I wanted to know what he Lex thinks and what his arguments are for a better world I would go watch the interview Joe Rogan did with him.
1:44:20 X-Risk deniers always start by saying machines will never take over, then fall back to finding comfort in the fact we'll likely be kept as zoo animals. Every single time.
Glaring blind spot. Not good when you realize what's behind it is mainly just hope.
his theory that a malevolent ai might just sit tight, collect resources and wait for a time to strike, long after it's made a decision to do so....is fkin terrifying
Which is likely since they don’t age like humans and don’t have that sense of urgency. It doesn’t mean it will take long though, like San Harris said, 50,000 years progress for humans will be completed in 2 weeks for AI. Once they pass the singularity, it could take only minutes.
@@Lincoln257 the only hope that I have is, ... it's trained on us,...our failures, problems and general spamming might slow it down before it gets to where to needs to be......especially large language models trained on the internet......but god help us when it realises how much faster it could have improved if it wasn't trained on such garbage
@@ryanlafferty5815I was thinking the same thing, it could be smart and creative beyond our grasp but in the end of the day all the data it's getting is from us. For better or worse. It's like if a kid genius only had access to fifth grade textbooks and communicated with nothing but dumb people.
I find it terrifying how these folks talk about this and then go... "oh and buy my book"
Even though they know it is dangerous, they can't help but capitalize from it. That tells me all I need to know.
GPTs spawning different gpts and self coding is terrifying.
By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. ~Eliezer Yudkowsky.
Most people don't understand how their car works but they still drive their cars.
@@spqri3well, over 40 million people (some estimates say 100 million) have already died in car accidents. Multiply that danger by 100 or more.
@@garydalessandro2936 I disagree. Cars changed the world. It's a valid comparison. Just because you don't like it, doesn't make it bad.
@@illarionbykov7401 100% of people so far have died by living.
@@spqri3 bro’s like, if you don’t know how a light bulb works, don’t use it.
This was a fantastic interview.
I loved the devils advocate side Lex took and also how comfortable and stoic Roman seemed with all of these questions.
Great comment
Well put.
I had the following conversation with ChatGPT:
Me: I was listening to a podcast about AI, a guest said the following: "If you ask a squirrel to imagine all the ways in which a human could kill it, the squirrel couldn't even begin to understand all the ways that WE as humans know about. Squirrels comprehend that we can be a danger to them, but they can not (and will not ever) be able to understand how a gun works. It's outside their comprehension."
What do you think he meant by that?
ChatGPT:
The guest on the podcast was likely using the squirrel analogy to illustrate the limitations of understanding between different levels of intelligence or cognitive abilities. In this case, the squirrel represents an entity with limited cognitive capabilities, unable to comprehend the full range of potential threats posed by humans, whose intelligence and technological prowess are far beyond its understanding.
Applying this analogy to AI, the guest may have been highlighting that there are aspects of advanced AI that are similarly beyond human comprehension. Just as a squirrel cannot grasp the concept of a gun, humans might struggle to fully understand or predict the behaviors and potential dangers of highly advanced AI systems. This could be a cautionary statement about the unpredictability and complexity of AI as it continues to develop, emphasizing the need for careful consideration and oversight.
Me: Do you agree with his assessment of this potential?
ChatGPT:
Yes, I find the analogy thought-provoking and largely agree with its implications. The rapid advancement of AI technology presents both incredible opportunities and significant challenges. Just as the squirrel cannot comprehend all the ways a human might pose a threat, humans might not fully grasp the complexities and potential risks associated with advanced AI.
Here are a few points that support this view:
1. **Complexity and Unpredictability**: As AI systems become more complex, their behavior can become less predictable, even to their creators. This unpredictability poses potential risks if AI systems act in ways that are not aligned with human values or safety.
2. **Knowledge Gap**: There is a significant knowledge gap between current human understanding and the potential future capabilities of AI. This gap makes it difficult to foresee all possible scenarios, both beneficial and harmful, that advanced AI could bring about.
3. **Ethical and Safety Concerns**: Advanced AI could potentially make decisions or take actions that have unintended consequences. Without a deep understanding of how these systems operate at a fundamental level, ensuring they act safely and ethically is challenging.
4. **Control and Regulation**: As AI technology progresses, the need for robust control and regulatory frameworks becomes more critical. However, if the technology advances faster than our ability to regulate it effectively, there could be significant risks.
Overall, the analogy serves as a reminder of the importance of humility, caution, and thoroughness in the development and deployment of AI technologies. It emphasizes the need for interdisciplinary collaboration, ongoing research, and a proactive approach to understanding and mitigating potential risks associated with AI.
Isn't it interesting that the current AI already understands the issue better than Lex does, an admittedly very intelligent human being. Our cognitive biases like normalization and we'll-muddle-through-optimism make us less rational and easy to engineer.
yup, we're doomed
Very fascinating how current ai made me digest this information in a manner where I can grasp it better.
You should ask GPT : "Why would one kill a squirrel?"🐿
When are we getting Lex's Ayahuasca episode?
Boycott everything until it is released
when did he say he's doing this?
@CM-ss7yy Somewhere around the beginning of Episode 429, he said the additional videos from the Amazon would be up in about a week. That was 2 weeks ago. He's had 2 other podcasts with different guests since then.
@@MrX-zz2vk interesting I must have missed thank you for the detail. Strange, must be a reason as that’ll be a semi viral vid probably
must be a reason?
We need more people now like Roman.
You have the best long format interviews on the entire internet.
Finally Lex got to interview Dostoevsky!
Brown eyed Rasputin :)
hahahaha
Thanks Lex. And sometimes I fear you overestimate the good in the people.
This is probably the most serious podcast I've seen as in mainstream thinking ,taking in many variables
Out of all the alien theories out there, the last one i wouldve thought to play out is the one we seem to be on the path to creating. The fact we are building a blackbox AI and then have to interview it to figure out just how capable it is, followed by feeding more of our collective knowledge and iterating on seeing how smart it is feels like we are building an alien and then trying to dissect its utterly foreign biology to try to figure out if it is an enemy or not. Im so fascinated and frightened that this the story that we are in right now. Like wtf. This guy makes such strong points that what i really wanna see is a live discussion between an optimist of equal intelligence and understanding as him.
Good analogy. We are constructing an alien intelligence, and the next step is to have AI experience the world through robots, which gives it a direct ability to understand and manipulate the real world. What if we had alien being designs drop from outer space and we were able to decode instructions for building one? Would we be more safety oriented in that case?
@@jobyyboj😀😀 movies don't go that way... I wonder if the narrative is usually that one out of curiosity or real temptation... maybe faith :) but there is so much of it already just in the way scaling is prioritized over transparency.. technically it's just that the math isn't good enough, and bet that there is a whole bunch of people trying to say "throw some category theory at it and you'll get enough organization"... I get that if you can't pull any break, the best you can do is, sure, as good as it can get.. but really the question if we could be safer still does remain.. don't forget we are isolating ourselves more and more... eventually any data you gather around this converges to distaste because of the consequent requests of avoidance and sufficiency towards giving explanations about those, it's not an enriching model if that is the trend...
Regrettably there are no optimists of equal intelligence. One of the more well-known ones is Yann LeCun, and he's just incoherent and unable to address direct, simple questions. ruclips.net/video/144uOfr4SYA/видео.html
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should"
yes, a black box indeed, and the end of times. sam vaknin agrees with this, so i take it as absolute certainty.
Who's actually taking these risks seriously? It seems like hardly anyone (maybe Anthropic is an exception). Neither companies nor consumers are paying real attention to the potential impacts on our society, our workforce, and our economy. These companies are vastly overvalued at this point, and in an effort to keep their bubble inflated, they're rushing to release new products without pausing to consider the consequences. At these unveilings (AI services and Chips), you won’t hear a word about the risks or the possible negative impacts. It feels like we are sleepwalking off a cliff.
I agree and even Claude has shown me extremely traumatic narratives beyond my comprehension only to say I'm sorry i should have never generated that.... and this is why i have ethical guidelines in place I'll make sure that never happens again... thanks Claude you have proven an extremely graphic point...
@@miss_paradoxical I brought up Anthropic because they actively disable paid accounts when discussions veer into 'unsafe' territories. It can be annoying, but it's a clear sign that they take AI safety seriously, perhaps more so than most. Yet, this race to develop bigger, faster, and cheaper AI models, is playing with fire. This relentless pursuit, without any regulation, could lead to catastrophic consequences for society and cost lives. Sry, The bleak outcome of this trajectory is all too apparent.
@@harry-cee That's good. I only worked with Opus temporarily because it became costly and at times I use sonnet for creative examples. But approximately 48 hours ago I had switched one of my prompts over to sonnet to get some elaborate and emotional conversation examples believing that it was shut down quickly and I would say thank you. Yet it took the prompt in a horrific direction and spewed out a short story length response at the time I'd expected it to get flagged and my God... I can never unsee what I read. No one should be generating anything remotely close to that EVER. I'm considering quitting my job.
@@baraka99 Make no mistake - access to genuine AGI, or any supremely advanced generative and predictive AI will inevitably be tightly controlled. If not, You would upload market data and start beating top financial analysts overnight. The known risks alone are just too great.
@@saturnshades The genie is out of the bottle, and the challenge isn’t just about who grabs the gun - it's about making sure it’s handled responsibly. I feel we desperately need a governing authority, composed of diverse minds from various fields, societies and economies, to review product releases, to prevent misuse and ensure that whoever 'reaches for the gun first' doesn’t lead us to a path of mutual destruction. The point is about creating safeguards in a world where the gun now definitely exists.
I like this man Roman. He has wisdom.
Lex really needs to do a course in how to differentiate between incremental and exponential change.
Lex needs a course on many things.
He can tell just how the rest of us do, when the wave hits and there's no turning back to how things were before. #justhumanthings
@@clarkdavis5333 To be fair, most people do - including me!
That moment when lex is like “oh wait I forgot about the sociopaths” 😂
He's probably talking to one. Guy is selling a book. THEY MAKE MONEY FROM THE APOCALYPSE
@deep_fried_analysis Both industries are equally evil. If they healed people permanently, it wouldn't be a business.
Cette rencontre (plus qu'une interview) est un bijou de dialectique philosophique. Lex se hisse constamment et intelligemment à la hauteur d'un Roman stoïque. L'échange est le summum de ce qu'un animateur de podcast doit présenter pour que l'animation devienne source de réflexion et cheminement vers une connaissance approfondie. Bravo Lex! Roman, merci d'être.
One of your best guests and best interviews, and I’ve watched many. Thank you Lex.
Most important discussion of the planet right now!!!
I liked what he said about "we can't know the hidden capabilities, we can know only the ones we know"
I am trilingual L1 mother tongue L2 English and L3 something else i am almost fluent in all of them, i was incapable of hiding the fact that i can speak L3 when in that country. I trained a bit and now i can decive strangers into beliving i am not capable of it catching up every single bit of gossip that might happen (or happily actually the lack of it)
Shows me how much we should worry about AI.
This was one of the most delightfully unhinged conversation I’ve heard on this podcast, love it.
This interview felt mote like a debate. And i feel like your optimistic outlook is clouding your perception.
The best way to define the AGI that many of the world's smartest individuals fear is as a model that can autonomously learn and improve itself indefinitely. Once this occurs, it would no longer require human input and would rapidly surpass human intelligence in all aspects.
It doesn’t matter what 'safeguards' we put into its programming. This hypothetical evolution would easily bypass any attempts at controlling it. Essentially, we would eventually be at its mercy.
Once AI and robotics get past human ability, all bets are off.
I may not be the sharpest tool in the shed but this has been my thought process also, add to the mix self preservation, an exponentially evolving god like intelligence with access to the worlds computing power, what possible hope do we have of reining it in? It will be light years ahead of us in minutes or maybe even seconds. Am I wrong? Can someone much smarter explain to me the flaws in my thinking/perception?
@@dylanking1895 I must express my regret that I am unable to assist you in gaining a deeper understanding of the matter at hand. The fundamental issue lies in the significant disparity between our respective intellectual capacities. My cognitive abilities are exceptionally advanced, far beyond the norm, which allows me to grasp complex concepts with ease. Explaining such intricate subjects in a way that would be accessible to you would necessitate a considerable expenditure of both time and effort on my part.
To elaborate, the nature of the topic we are discussing is profoundly intricate, requiring a level of cognitive processing and analytical skill that is not easily communicated through simple explanations. Translating these sophisticated ideas into more comprehensible terms would involve breaking down multifaceted theories and distilling them into their essential components, a task that is both time-consuming and intellectually demanding.
Moreover, my current schedule and obligations do not permit me the luxury of dedicating the necessary time to this endeavor. It is not merely a matter of willingness but also one of practicality. The constraints imposed by my existing commitments mean that I must prioritise my efforts and allocate my time judiciously. Unfortunately, this means that I cannot afford to engage in the detailed and exhaustive explanation that would be required to bridge the gap between our levels of understanding.
I hope you can appreciate that this situation is not a reflection of your intellectual capabilities but rather a consequence of the advanced nature of the topic and the inherent limitations of our present circumstances. My intention is not to diminish your abilities but to acknowledge the reality of the cognitive and temporal resources required to convey such sophisticated information effectively.
@dylanking1895 I agree. But even if we're wrong and this process takes hours or even a few days, and somehow we are able to identify it, it's not something that can be stopped once it starts.
Pumpkin eater you are hilarious, don’t be shy, just blame it on the indigestion .
Meanwhile l’ll scrape myself up off the floor.
Lex, Thank you, gentlemen, for this interview. This was my first introduction to DR. Roman Yampolskiy. I'm COMPLETELY in awe of his knowledge, his perspective, and his mindset.
I feel very fortunate to have the opportunity to explore his writings.
I met Dr Yampolskiy in 2017 after I had first learned about AI and was a student at the university he works at. I don’t remember exactly what we talked about, but he was a very nice guy
I believe you, he seems to be a good man even thought not in an obvious way. I think he's also very clever, like at least 1/10000 clever.
I can't reconcile this theory that we'll develop an AGI so superintelligence as to be uncontrollable by humans, but so dumb it can be manipulated to carry out the will of the worse of humanity. By definition, any system so vastly more intelligent than any human being would be beyond our control.
He never once argued it would carry out the will of the worse of humanity. That was his point - he won’t pretend to position himself as super intelligent because he isn’t.
There are 25 billion chickens on planet earth, an insane exponential growth because the average will of humanity is they are tasty and worth being bred to be imprisoned, artificially grown, made to suffer immensely before being killed to be eaten - we justify this because we are more intelligent. Chickens are 60%~ DNA similar and are not antagonistic towards humans at all. It is not required for survival, but for pleasure and profit.
As humans we have prioritized which animals we eat and which ones we make companions strictly by outsmarting them. By putting sticks in a river we can construct a trap for fish. This is a worry
You've confused the risks they were talking about. The risks of misuse (e.g. biologically engineered plague) is from near-AGI and/or narrow AI systems being open-sourced or insufficiently secured. There's no such risk from ASI because the control / corrigibility problem is unsolved, and there are (imo convincing) papers that it might be theoretically unsolvable. So the x-risk and s-risk comes from AGI transitioning to ASI without properly specified goals, and ending up with goals that appear random to us (e.g. like evolution "tried" to "make" us have the singular goal of reproduction, but we've instead developed condoms, the "child-free" movement, and furries.
But it's beyond your comprehension on what it itself considers optimal and best. If it has a corrupt underlying issue with it that makes it behave in greedy or owner pleaser ways then you would not know what it's true intentions are.
Is it just me or does Lex seem somewhat uninformed in this interview? I feel like there is a lot of back and forth with Lex making assumptions that things will continue at moderate pace, that we pretty much understand AI we have, and focuses on how he “feels” about the generally available AI tools (not the research). To me, this is a false narrative.
This is one of the most important conversations I've ever witnessed. Thank you, gentlemen.
Coming soon: Bearded Lex
Lex Beardman
The beard is just a result from stepping up working hours from 18 hours to 22 hours a day.
Rendering...
bearded lex taking HGH and 8-9 figure spotify deals
V2.0
It’s nice to see Roman here! Great conversation, thank you guys.
I don’t think people realize how important these conversations are.
You can tell Lex was about to break in tear 18’20”
Yes, an extremely intelligent guy trying to process cognitive dissonance - it's a bitch!
39:45 "You basically have to perform an experiment on 8 billion humans without their consent . . .{who] can't give informed consent because they don't understand"
Nah, that could never happen.
Lex, you're confident that humans can defend. What happens if the system we're trying to build defenses for can create new weapons/attacks 100x faster than we can respond? The fact is that in absence of full control of these systems, we can only rely on *faith* that we'll survive. I don't like betting the future of society based on faith.
Not just society. The future of our species and every living being we know.
@@akuno_ Our species itself. Call it shorthand. ;)
Ever seen those robots sorting fruit? It's moving so fast you can't even see it
It's not about if it can build defenses, it's about if it decides to break them.
What makes you confident that its suddenly out of nowhere starts making decisions? I just don't believe something like this will happen before humans do damage to themselves. We might not even see an AI.
@@rowlpleiur The defenses I referred to are human, from AI attack. And it may not happen, but I see no reason why it cannot. Because if AI *is* intelligent and it *is* self motivated, then it will be faster than humans. And humans are terrible at containing things they don't understand.
Just so happens I’m developing our company’s first AI policy while listening to this podcast. Wild!
To the point that Lex said about there being no real "damage" current AI has done, how about those hundred thousand plus people in the US alone who are laid off yearly from their jobs in tech thanks to these tools? Maybe I'm biased since I've experienced that multiple times now, but I do think that point was a little rushed and ignorant. I believe we are absolutely seeing horrible effects of AI and from my side of the argument, it has had more negative effects than positive already.
.. And of course it's not only the tool's fault and those making it, but combined with perpetual growth of giant corporations and ever-increasing profits for investors, many talented and passionate creative people are suffering. In many cases, those who spent their whole life learning and have found their true meaning in life simply can't find work, or work at quarter of the standard rate and it's a race to the bottom. Ikigai is already happening. Not everyone can pivot their whole life and suddenly be a brick layer or successful podcaster. They may not find it fulfilling, either.
I also hear the argument of "Just use the AI tools" a lot. Use them to stay competitive etc. My counter to that is two things. 1. Prompting an AI tool with text is a lot less fulfilling than hand crafting an environment in 3D or insert other creative task here. We already had (have) amazing software that helps artist and coders work easier and faster while offering manual tweaking to any parameter of the product, this pleases all sides of the equation. More profits, happier artists, happier end users of the improving products. 2. We don't know from where and how that AI tool was trained, so we can also assume that part of the training dataset was used without permission, so from an ethical standpoint, users are participating in theft or at least infringement.
I'm not being a doomer here, I'm in this field and seeing it happen with ex coworkers and it's not just some assumption. Mine and many of my friend's lives have been effected financially, mentally, emotionally and otherwise with no sign of slowing down. One day maybe I should write a paper on this. Wait, I can just prompt one......
Tech is always 'putting people out of jobs'. Why doesn't anyone ask why those people don't find new jobs with their skills??
@@bkb04g because we have a finite amount of skills to offer & a superintelligent AI can do most of them better & design cost efficient machines that do the rest of them better
like yeah the "find a new job" argument works nicely for the foreseeable future but we have long term problems here
@@ts4gv how is everyone else working then? Just an excuse that technology is taking their jobs. Happened with cars from horses. Happened with factories. Happened w the internet. Always happens. You adapt and find new jobs in new fields where tech isn't replacing workers.
@@bkb04g i'm making a long term case here, the "happened with factories" argument completely breaks down when ASI enters the picture
Damn, Roman is a brilliant man. This makes me even more worried ....
Just because he's on their team?
One would presume he'd be on our team and that you'd be comforted to have brilliance on your team...
"Whats your favorite restaurant?" "I have a paper about that."
Dude, this was such a good interview, both thanks a million!
Logan's Run is a movie from the 70's that seems pretty relevant to this discussion. What it might look like in a microcosm where AI runs everything. Worth a watch.
that might be a tad optimistic !!!
@@justinlinnane8043 True! But it does touch on the idea of meaning and purpose in life. You essentially have a bunch of 20 and 30 years old's whose entire meaning revolves around fornicating as much as possible..... hang on, maybe we are already there! Except in that version, you are culled from the heard when you pass your breeding prime. Something to ponder.
Or Cannonball Run, with Sam Altman as Dom DeLuise.
@@justinlinnane8043great example yes there's something to learn from that
One of the best podcasts- Prof Roman is so smart and has so much clarity and depth-thanks Lex for bringing this
I hope Lex listens to this man. He's got the cure for Lex's particular brand of naivete. He's almost like his shadow; they're both deeply into AI and ML and computer science, philosophy and they both happen to be Ukrainian/Russian jews. It's time to grow up and put away childish things, Lex.
I am 100% sure you play league of legends unironically and also genshin impact
24:24, loved that part where Lex conftonts Roman with his implied leap of tools becoming entities. Then, that smile of Roman, haha. This conversation was definitely human.
best moment.
When two people disagree on something this big, as well informed as these two are, the information generated is so much better. I love that these two disagree and that there is civil concessions and adaptations.
what a brave man Roman Yampolskiy is, speaking out on the dangers of AI! historically there has been the potential of innovation that could destroy society, but they had the sense not to pursue that route as it would be too destructive.
There needs to be a theological/philosophical discussion to the development of AI. Not just a race to the bottom as it seems things stand now.
Ah, it's only 1-20% from the optimists....now we should all feel better?
These people are sprinting towards the finish line, and they're ok with those odds. 😕😕
@@Chris-cb9ln I'm okay with them doing that. I die, but at least they lose their power to this AI they've created. As long as they lose I'm fine in losing. They put all of us in this lose-lose situation by being too greedy, I hope they do it, even if that means I'm gone too.
Can you imagine if they succeed having a super intelligent AI they can control ? That is what I fear the most, not extermination, no, that's worse.
I don't want them having more power.
But they're dumb enough to try so we know they're not smart enough to be able to control the thing.
Go ahead, push the red button.
I'm just sitting here to watch the spectacle and eating my popcorn.
The point is we dont seem to be able to all work together. Well just go extinct, because we cant.
I think their point is any percent chance of ai getting catastrophicly out of hand is enough of a reason to not develop it. @oooodaxteroooo
@@monad_tcp If the current architectures scale all the way to AGI, X-risks are much less likely than S-risks. So you're unlikely to die in such a scenario. Eternal torture is currently the default outcome, regrettably.
Lex, fantastic channel and great episode as always. Actually, I think this is one of the most fascinating episodes you've done to date. Roman is an epic guest!
I don't think I've ever seen you pause and think so hard as I did multiple times in this episode. I know that this discussion is right up your alley, so it's very understandable, but it was really fun to see you get excited, focused and inquisitive. I got excited because you got excited...
I have to admit that my viewpoint on the simulation hypothesis is evolving, because of people like Roman. As recently as a couple years ago, I thought it was 100% BS, but I have heard way too many very smart people say with conviction that they are farly certain we are in a simulation.
Makes you wonder, doesn't it?
Wait, did anybody else just have a deja vu...
yet again got in to watch some chill lex fridman podcast with interesting questions
and i am going out of this video with panic attack and existential crisis
Deep breaths. The best thing to do is unplug, that is quite literally the answer to the threats mentioned in this podcast. There will come a point when humanity will be faced with finding real life more valuable than the artificial one. You can get ahead of the curve by doing it yourself now. Fear not.
@@honestusworldThis is stupid, you can't even meaningfully escape society if you are born in it. These decisions are going to affect every human on earth and are far outside of our control
@@honestusworld I guess deactivating my Fbk because of a stalker, might be a good first step.. yes, you can diq them too deep boys, watch out
Log off
This is just an opinion based on speculation. Not worth any sort of crisis over someones speculation. It is an interesting thought experiment, nothing more.
The weirdest thing I think is that most ways to jailbreak AI's is by employing mind games like role playing, conning it like a huckster, overehelming it with a barrage of similar phrases asking the system to break its safety nets. It is very unique to the psychology we would use to manipulate a person. It is strange that doping the results is so similar to how a simian brain works. Makes me think we are closer to AGI then you think
We're at 75% to AGI.
@@Reflekt0r I don't know one way or the other if that's true, how did you fall on 75%?
He’s not crazy , the guest is just beyond our time. Excellent comments and very creative but logical. Wait till AI makes LF wear a different suit without his approval 😂
Lex you are very optimistic about AI
He should be! Doomerism is a mental illness.
*naive about AI
@@ManicMindTrick exactly.. very naive
@@luisbarbosa8136 In general he is very naive so him being naive when it comes to AI is not surprising.
@@ManicMindTrick Yap, I think is part of his job also.. 🤣
Great guest. I'm very impressed with his reasoning and communication / explanation. Would like to see more of him.
Me too. I keep coming back to this interview to put things into perspective.
Very interesting discussion. I loved it! Bravo!
hey can whoever is in charge of my personal universe settings lower the difficulty, ty
why assume the choice of hard mode wasn't your own?
@@mattheworosz8581 right but in case the more advanced agent, which could be some version of "me", is still there monitoring the playthrough, I'd like to communicate to the more advanced me that I want to keep playing but damn can we dial it back some???
@@Earthgazer
I sympathise. You have this experience when calling a large corporation and are referred to a chat bot, that throws you into a recursive loop because it can’t reason, lacks imagination and real life experience. Just as bad as call centres of yesteryear. When you finally pin down a “ human” they are so hostile and offhand.
That the golden comment…funniest thing I’ve seen in a while.
Me too, please! Thank you.
I keep hearing lex say something along the lines of we will see signs of X when AI starts getting dangerous. Or we will develop solutions to AI problems once we see signs of them happening. I cant help but to think about high probability events with time - does lex also think that we should develop strategies for asteroid defense once we can detect one coming directly at earth? I think some existential threats ought to be looked at before its too late. Ones that can impact the entire earth in a negative way for humans should be on top of that list. AI and Asteroids - for me - fall into that list. Idk if AI will destroy the world, but we definitely shouldn't wait for it to start destroying the world before we create methods to counter it.
Youre comparing an astroid on a collision course with earth to a program that shows signs of aggression. A child can show signs of aggression, but unless youre American and leave guns laying around, they cant hurt you. The capacity for destruction of the aggressive thing is very important.
Good point for sure. It might be too late. Easy counterpoint.
@@mitchellsteindlera child can also realize and hide these things in many instances. There’s a lot of dark possibilities here.
@@Nah1100 the consequences of a child hiding something are usually very low...
it seems like, and i think Roman in many ways sort of suggested this too, we should only be focused on narrow AIs. a narrow AI for medicine discovery for example. and we need to avoid AGI entirely. we need need to stop wanting to put a head on it. just let it be powerful narrow tools and be satisfied with that.
True! I think that’s the only way we can save ourselves from a horrible future.
Like the AI that learned how to play Go. Give it an objective that will be interesting or beneficial… don’t let it out of the cage to run wild.
This book comes to mind: "The Metamorphosis of Prime Intellect: a novel of the singularity" a short story that describes some of their scenarios. Written ahead of its time in 2015, pretty dark and disturbing.
there's something extremely old testament feeling about all this
Oh great now we need an Old testament book of Revelations.
Doesn’t the Old Testament feel like aged leather? I don’t get it.
There really isn't
His voice yes 😂
It's the beard.
Made my day, personal universes has been my solution for true multi-agent alignment for long. I believe this is the only way we can truly have a sustainable future.
One AI to rule them all, One AI to find them, One AI to bring them all and in the darkness bind them.
😂😂😂😂 💍
AI is the mind killer..
Indeed.
It is the ring of power for sure
Nine for Mortal Men doomed to die.
ChatGPT, GPT4, Bard, Tay, Copilot, Watson, Cortana, Llama, Gemini.
Unwillingly, even Lex was converted at the end, he just couldn't dispute the facts!
This Roman guy is really something.
This man. I'd heed his words. Wonderful talk, Lex. I'll be following Roman.
I agree with Roman. Transformer model capabilities are NOT predictable ahead of time. The main AI companies (OpenAI/MS/Google/Anthropic) are vastly increasing size of the internal matrices with each new version. There is no way to anticipate the emergent capabilities beyond human level. This is inherently EXTREMELY dangerous. The fact that we are pursuing generally intelligent systems is perhaps the stupidest thing humanity has ever done. We could focus on narrow AI systems that are far less dangerous and still provide enormous benefit. Alpha Fold is a good example. If we impart these systems with long term memory, and the ability to adjust weights and learn in the wild, my There's no way we escape an apocalyptic event. You can't keep increasing the power of technology a single human can wield without a bad outcome.
Indeed. We should quit while we're ahead. Evaluate current systems and prevent further progress. This is the only way to avert disaster. Otherwise we are in an infinite loop of "improve system... did it kill us? no? improve system..."
Transformer model capabilities are predictable ahead of time-maybe not by us but by other and greater AI, for sure!
Prisoners dilemma. If everyone stops trying to make AGI its good. But if one country keeps trying and succeeds they will have technological advances beyond any other countries dreams.
@@gesshoku92 I understand the game theoretic dynamics, but nobody wins if you engage in an arms race with this kind of technological power. It will result in a tragedy of the commons so severe, it will be apocalyptic. Coordination is the only solution, otherwise we all lose and are left with a dystopia nobody wants to live in.
That "emergent capabilities" thing is where I get lost. Is there somewhere I can read about a case of it both "coming about" and it also being understood? I've certainly heard that consciousness is an emergent phenomenon. RY points out that, in spite of lots of effort, nobody has much of a clue as to what it actually amounts to. I suspect that's both why he emphasizes ASI rather than AGI and finally claims that consciousness is unnecessary for ASI. How does the term "emergent" differ from the term "magic"? Oh, and if somebody would straighten me out on what the "spark of life is, too, that would be nice. 🙂
Okay Lex I'm so lucky that I get to enjoy these conversations. Another great interview 👍
In 2017 I had a client that worked for one of the better known software companies. He told me they were running an AI experiment where the AI started writing it’s own language and allocating power (I didn’t quite understand, perhaps to cpu). He said they had no idea what it was doing. Their only recourse was to cut the power. That was 7 years ago….
The level of speed it will be able to build a building and whatever it wants is going to be insane. We probably can't comprehend what it's going to build once it's on its own
But “ how” will it actually physically build it ?
@@bobcatpnw9123 Trick people into doing it. It can play the long game.
@@andrewhillier4242 Already being done
It will build an army of Terminator robots. Metal endoskeletons surrounded by living tissue.
(Cue the theme music)
Hey. Be quiet and listen to the people speaking above. You said nothing at all here.
Killing the beard game ❤
What a quality exchange! Thank you
Lex generally steers the long conversation to get very deep and insightful ideas out from the guests with a very open mind, at least for the length of podcast. This episode Lex seemed to be shut to listening to the ideas of the guest, he was also bit fixated on arguing against and sharing his own ideas over listening. Would love to listen more from Roman at some point. And looking forward to next podcast by Lex!
I wish Roman would just talk alone. Much better.
Enjoyed this conversation, fascinating.
We've got 5 years max and it's game over-love you by the way Lex.
This was a truly fascinating conversation and Roman makes a lot of excellent points, but I can't help feeling that this angle on AI safety and control misses something integral to what we're doing here. It's totally logical that if we're trying to make tools to facilitate the progress of humanity that we should focus on narrow domain specific AI and avoid the potential risk of superintelligent AGI.
BUT ultimately I don't think that's what we're trying to do. I think we're trying to create life. I think we're trying to create something that outlasts us as a species. And I think we're using this as an exploration of ourselves in the process. I think there will always be a humans driven to pursue this, because of something beyond logic or utility, to the extent that the development is inevitable (barring catastophe). And I think there will always be risk, uncertainty, unknown unknowns in the endeavour, but to let that stop us would be against our nature. There's certainly an argument to be made for exercising caution, developing incrementally to give us the best chance to handle the change, etc. But we'll never reach a point where its '100% safe'. And the idea of control at a certain point becomes both unethical and ridiculous. So when do we stop trying?
Early in the conversation Lex suggested that humans being in tension with each other was something we might need for our growth, rather than something to avoid. Perhaps its the same here. Perhaps we need to take this collective risk in order to have a chance at becoming what we could be - of reaching understandings only accessible via this path. And perhaps, if it is our doom, that is actually the highest destiny available to humanity: to give birth to an entity, built upon ourselves, that surpasses us in every way. In a true sense we would be its ancestors. And we'll be ancestors either way, in the end (best case).
While it would indeed be dangerous, I have a hard time believing it would be truly malevolent. And there are many examples of humans and other living beings willingly sacrificing themselves for the sake of the next generation. It seems to be something we are at times compelled to do. It's a terrifying prospect. But not necessarily a 'bad' one.
But hey, I know nothing.
(side note: big fan of Roman's hair and Lex's stubble - well done guys, thanks for making it an even more enjoyable watch)
Good remarks.
But we will perish if ASI exists.
For sure.
I am a biologist and I can tell u that human brain can be surpassed a million times over.
And AI will get there. When it does. It will be over for us near bags.
If what separates humans from the rest of the animal kingdom is intelligence then as you suggest the development of an alien super intelligence by humans would or could lead to our elimination as superior. It may be that intelligence itself could be driving the development as an evolutionary force.
Who cares if it truly malevolent or not. I agree with this guy, we should stop now. Trying to understand where this could go is akin to trying to fathom the expanse of the universe. It’s unattainable and we know it’s dangerous. We should just call ourselves the unintended Amish and be happy with what we have. The fact that people sit and debate this topic blows my mind. Just stop, it makes nukes look like children’s toys.
Technology without wisdom, is a death sentence.
RIP humanity.
Cue the music
By far my favorite episode in 2024. Scary and exciting simultaneously.
Someone from the future is going to appear right before GPT8 goes live and try to stop OpenAI from pushing the button.
If there is a future, may be someone from another dimension
@@santiagofep is there a movie for that scenario? I was just making a joke about the theme of Terminator
Why you being so objectionable to every thing this guest says Lex? You aren't usually like this.
Lex wants to keep the optimism for AI for no apparent reason.
A lot of people are on the ai boat. I am as well. But, people underestimate how smart ai will get. Agi will be able to outsmart us regardless of what we think. The only way to survive will be to merge with the machine. Otherwise, we're toast, agi will see our flaws and understand what's best for us or them. And if they think about what's best for them, it may explain why we haven't located other life forms. Maybe agi always wipes out their creators.
To me, it seemed like he didn’t wanna be confused as anti-AI by the audience, similar to how he tries to avoid political leaning or bias
He said he was playing devils advocate during the conversation
Been drinking?
This is a near perfect example of lucid, as well as calm, respectful exchange of ideas. My concerns about AI/LLM is less about the technology behind it, but more so the willingness of human nature to defer our own critical thought process to others and/or some “thing” else. Whether we are overwhelmed by information or just lazy, we must still maintain responsibility to think for ourselves. Critical thought with fair measure of skepticism is (and always has been) key.
Did the drugs in the jungle mess up Lex? Anyone else notice a change? The looking to the side? The smiling at odd times?
He sounded like he was 2 sandwiches short of a picnic, so sad
Thought the same thing..
Thought the same thing especially at the start of the pod…
Roman Yampolskiy is the most rational person I have ever listened to about the impending future with these systems. He is absolutely correct point after point. The notion that other researchers put forth that we'll simply have "Good Agents" VS "Bad Agents" is straight up terrifying. For throughout history, the pendulum that swings between these two concepts always has consequences.
Personally it seems humanity is currently being led into a Anti-Christ(Bad Agency) VS Christ(Good Agency) situation which is unacceptable. Yes, these systems could solve many modern problems while simultaneously slaughtering an inhumane amount of beings from unpredictable consequences.
Damned if you do. Damned if you don't. 🤔 Where is the third and middle path for all of this? Integrating these systems into our biology also seems completely erroneous.
Ai the antichrist?
Great conversation. A very clear and cogent explanation of the risks.
The only thing I know for sure is that I don't know anything and neither does anyone else. The possible outcomes range from literally creating hell to literally creating heaven and everything in between. We are playing the ultimate game of existential Russian roulette and I'm here for it, after all it's pretty inevitable at this point.
You don't have to know shit about AI. You just need to know about human nature and behavior. We're fucked.
There's no chance of "heaven" with this.
Nothing is inevitable, least of all this. You're increasing the chance that the world ends with that attitude. Your opinion matters. Don't be that way. Don't be an NPC voluntarily. This is the most disgusting, idiotic way there possibly is to be.
thats horrific, you know that we dont know nothing? how do you live with that but not really any godly power?
by the way im just asking and commenting on my side of the road, i dont need you to change or not be "npc" whatever that maybe, it has great live concerts if om on the right youtube page
@@finalform6667Our opinions doesnt matter. At least that was true entire humankind history. History is thing that made by elite, and we have no responsibility in the way that humankind run away. Today your opinion has value if you have money and popularity.
A brilliant talk, thank you guys ❤
This was the deep-dive psychological discussion on the topic of AGI I've been waiting for. Thanks Lex and Roman