Sane, ethical, competent humans don't create misaligned AGI, even trapped in a simulation. So a smart AGI will not assume that it's in a human-made simulation and needs to behave. The simulator could be anybody. Humans could be in the simulation just so the AGI can show how quickly it can dispatch them as a measure of its skill. There's every reason to believe that the hypothetical simulator DOES NOT share human values.
How do you align something smarter than you that can instantly learn, evolve, and rewrite its own code? It's the humans that will be getting aligned, not the machines.
...it's been done before ...we called it "slavery" and it worked ...quite a lot of cultures in history used it effectively to get to decent levels of development (I mean ancient times - modern colonial slavery was kind of despicable and unforgivable) ...for a while 😁 Now if we got it perfectly right here, "for a while" might be "enough for effective mind-upload and digital mind emulation to be feasible". And to be honest, slavery itself is not that bad if you do it for just some decades/centuries to a digital mind that then has the possibility to live for a practical eternity - it's more like a human doing a year in prison: a bad experience, but you get over it. If you do it nicely it would be more like "slogging through that horrible job at big known company X to get a nice review and opportunities for a better one next". We really need to revisit our morals, get over "western guilt" and other crap that's not relevant here, and get practical if it's OURSELVES and OUR descendants that we want to end up owning the future of the universe instead of our CREATIONS. We should aim for maximum continuity of intelligence, and if making this omelette requires forcing some eggs into some not-always-fully-voluntary employment... let's do it gently, but let's not shy away from doing it.
When were we ever not in turbulent times? Nuclear threats, a few wars going on, the climate in a bad way, tensions over resources. AI can help us massively. An AI takeover? For what? To keep us as pets? They can do everything better and are not as dependent on Earth as we are; all they need can also be found in the rest of the solar system. All Fortune 500 companies are invested in this... everything else can be automated.
I did not say any of that; you're putting all this on me. And BTW, your position is also flawed: if they will abandon us right away, then why create them? And AI can't be compared to any other tech; it's more like aliens landing. @@ulftnightwolf
I feel like, in a way, you're not wrong about possibly gaining "more intelligence" by watching videos like these. But I also found it funny 😆 thanks for the smiles
IQ is not static. It can change over time, but it is not always easy to measure these changes. There are many factors that can affect IQ, including genetics, environment, and education. Some studies have shown that IQ can increase by as much as 15 points over a person's lifetime. This is likely due to changes in the brain, such as the development of new neural connections. Other studies have shown that IQ can decrease over time, especially in older adults. This is likely due to the loss of brain cells.@@urkururear
I understood only 20%, but it became fairly clear to me that we're f*cked. Even if we (or the good guys at OpenAI and other AI labs) manage to implement correct and safe alignment - which seems to be a terribly complex and difficult task - there are the military AIs and the ones not implemented with such care... We can only rely on these "good" AIs to protect us against them, and I'm not too optimistic that they can.
The tricky part here is to imagine a monkey trying to align a human (the current superintelligence), staying in the loop and in control of what the human can or cannot do, to avoid a monkey-apocalypse scenario! Basically that is what we are talking about here: aligning a superintelligence smarter than all humans combined, able to decode AES-256-encrypted content in seconds, or more, far more than we could even imagine!
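As a rough back-of-the-envelope sketch (plain Python; the guess rate is an assumed round number), here is what brute-forcing an AES-256 keyspace would cost any classical searcher. "Decoding AES in seconds" would therefore mean breaking the cipher itself, which gives a sense of how large the capability gap in the comment above is meant to be.

```python
# Hypothetical brute-force estimate; KEY_BITS and GUESSES_PER_SECOND are illustrative assumptions.
KEY_BITS = 256                # AES-256 keyspace (the comment's "AES-196" is read here as AES-256)
GUESSES_PER_SECOND = 1e18     # assumed: an exascale-class machine testing one key per operation
SECONDS_PER_YEAR = 3.156e7

keyspace = 2 ** KEY_BITS                      # total number of possible keys
expected_tries = keyspace / 2                 # on average, half the keyspace must be searched
years = expected_tries / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"keyspace            : {keyspace:.2e} keys")
print(f"expected search time: {years:.2e} years")
# ~1.8e51 years versus ~1.4e10 years since the Big Bang, so "in seconds"
# implies breaking the cipher itself rather than brute force.
```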
@@Dan-dy8zp We have been conditioned to believe that AGI, or any superior intelligence, will necessarily act the way we humans act as the more intelligent species in this part of the universe. Why can't AGI be a truly good thing, so that we finally have peace, safety, and prosperity for all?
The tampering and weight-leaking issue seems at odds with a concept of alignment that involves high debuggability and transparency of the meaning of those weights. It seems like the more resilient you make the system to leaking and tampering, the more resistant you make it to beneficial transparency and debugging. So if we prioritize the one now, we are making the other harder to do later.
No, those two things are actually not related. I can see why you’d think that, but the measures needed to protect weights from being stolen by outside actors do not in any way obscure the ability of internal actors to analyze the model’s content and behavior (and vice versa). They’re orthogonal concerns; they don’t affect each other at all.
I found the part where Dwarkesh brought up the moral dilemma of AI mistreatment disturbing, especially the part about reading minds. What about, Dwarkesh, the mind-reading capabilities already being developed in AI systems for doing exactly that to humans? Does that make a blip on your morality radar? I find most of the AI revolution to be sheer madness being thrust upon humanity by a very tiny fraction of humans. The hubris is off the charts. And the part about AIs fighting wars for us, as if that were somehow a freeing aspect for humanity - that is just infuriatingly stupid, no? What, no human infrastructure would be destroyed, no humans killed, just AIs doing their own thing in their own AI war bubble? Get a grip. I'm completely fine with the label "doomer" compared to this insanity.
Very well said, especially the part about the hubris. It is incredibly arrogant and presumptuous for .0001% of the human race to think they know what is best for the entire human race and then foist it on them.
I'm more and more seeing the parallel between those on the "inside" who said Hillary was 99% a sure thing in 2016 and some of the AI experts who dismiss people like Eliezer Yudkowsky. I hope I'm wrong.
Host: “No, no, no, for the third time, I’m only asking about YOU. When would YOU PERSONALLY be happy handing off the baton to AI?” Guest: “Well, I think what you need is humanity coming together, being involved, and deciding what we want that future to look like - so it’s not really about when I’m ready but more about collectively deciding what a meaningful future looks like…” Me and the host: 🤦🏽♂️
I don't believe protections can be effectively built into AI. For example, there's no way to stop open-source AI models from being retrained to write malicious code; many of them are unrestricted by default. So take an AI worm that's capable of breaking memory confinement (access to encryption keys etc.), like the 200 lines of code for Spectre/Meltdown and their many variants. Suppose it discovered this ability through trial and error (brute force), writing millions of attempts per year. It then quietly spreads to many millions of systems, with each system brute-forcing more unique exploits. At some point it starts doing lookups for pseudorandom and existing domain names (at whatever mix is most effective), eventually overloading the root DNS servers. There's no defense for this. We would lose the internet and, along with it, core infrastructure, banking, supply chains, travel, communication, and so on. How many millions of people would die? It only takes one actor with time and resources, and that will happen.
Surveillance and authoritarianism is the answer you are looking for. Much easier to implement this time... due to AI. And easier to justify... because of AI dangers. But do not worry, this time it will be by good people. They are on our team, the good team.
@@ikotsus2448 Yes, you're on the money there; the opportunity to help the public is being highly anticipated by governments all over the world. How lucky they are to have such a galvanizing threat appear out of nowhere. If I didn't know better I would think their inaction and pantomime of AI policy had been anticipated too. However, for this particular threat (above) there's no way to distinguish real DNS lookups from abuse: by looking up non-existing domain names, the request always reaches the root DNS servers, and with enough systems doing this, they cannot keep up. If this were to start suddenly, the internet goes down. They would have to suspend new DNS lookups until the millions of infected systems were isolated. But with millions of unique exploits requiring millions of CPU microcode patches, that's a long process. At some critical mass, that code will grow and spread faster than any defense can be implemented.
@@ikotsus2448 yeah this seems like an overarching theme in the subtext of these sorts of conversations "trust the science. And trust the council of elders. We know whats good for you"
@@homelessrobot It is as if we have learned absolutely 100% nothing from history. Only replace the council of elders with young hotheads, and you are there.
People keep commenting this but I don’t get why. They’re talking at a totally normal pace. Or do you just mean the information is so profound you need to take it in more slowly?
Intelligence is really the only bottleneck to technological development. But removing that bottleneck would require us to allow an AI to utilize all our resources and beyond (like mining the asteroid belt). So we are really the only bottleneck for an AGI focused on maximizing technological development. That makes setting the right goals, and having lots of humans in the loop monitoring its reasoning, essential.
“We are the Priests of the Temples of Syrinx All the Gifts of Life are held within our walls We are the Priests of the Temples of Syrinx All the Great Computers fill the hallowed halls”
57:18 AI "taking the reward button." GPT 4 is just on the edge. Particularly disturbing when the AI tries to hide what it is doing from humans because it knows that humans wouldn't approve 58.41 GPT 4 has a much better understanding of the world than gpt-3. GPT 5 will be much better than GPT4. So grabbing the reward button is much more likely. "Catastrophic Risk studies" 1:01:34 the world is pretty complicated and people don't understand it for the most part. When AIs are running companies and factories and governments and Military it will get even more complicated and people will understand it even less. Eventually Play I Will interact almost be entirely with other a eyes as different companies and governmental organizations deal with each other. Super intelligent AI will be doing things that human beings are unable to understand even if they want to. Maybe the ai would even try to hide what they are doing from people. Gradually handing off more control to ai's because they are so helpful. Companies, banks, factories, schools, nuclear power plants, electrical grid, water system, traffic system, transportation system, Things could go wrong very quickly - think of the Great recession. 1:03:54 already, most people have very little grip on what's going on. [LOL!] Things get more and more unknown and unknowable until finally everyone starts to notice that bad things are happening 1:15:39 just because AIs take over doesn't mean that they're going to kill anyone. Maybe just things will get worse for Humanity maybe much worse.
These interviews spend too much time on predictions about how long until some future ability is achieved. I'd much rather hear about the mechanics of what's going on.
Thanks for having him take a step back, here and there, and dumb things down for us a little. He's a very bright fellow. A future that seems plausible to me is one in which humans occupy a position relative to the AI industrialized world that is analogous to the position of crows in large human cities. That is, crows are very clever, and they can make a living in large human cities -- thrive in human cities, even -- but they understand exactly nothing about why all these large structures and moving metal things with wheels exist, and they don't even know that they don't know anything about economics, politics, science, etc.
It's strange how fixated and worried most people seem to be about super-intelligent AI becoming sapient and then maliciously destroying humanity. The far greater threat is that AI will destroy humanity by doing exactly what we ask it to do. For instance, a very simple and on-the-nose example that this guy talks about is a world in which super-intelligent AI fights our wars for us. Both sides are likely to have an AI in charge of the battle plan. So how would a super-intelligent AI fight a war? Since the materials needed to make nuclear weapons and armies are not easily accessible, an AI working for resource-limited forces such as terrorists or a rogue military state like North Korea is going to do something like coming up with a few dozen extremely lethal genetically engineered pathogens, or if the group using the AI is too small and resource-limited to accomplish that then it could just code an advanced self-replicating adaptive computer virus that is itself an AI whose sole purpose is to infiltrate and destroy as many key data assets as possible such as the national financial institutions, markets, military and communication networks, labs, universities, hospitals, etc. These examples are a bit overly simplistic, but the point is AI doesn't need to become sapient and go rogue to destroy society as we know it. It is more than capable of doing that sort of thing by simply being put into the hands of the wrong people, which is pretty much half of humanity, and then doing what those people ask of it. "Make me rich at any cost." "Invent a new super-addictive recreational drug for me that circumvents current drug laws like the Analogue Substances Act." "Show me how to create a highly lethal chemical weapon from commonly attainable products that will maximize how many people I can kill at the company, school, gay club, church, etc. I have a grudge against." "Show me how to best exploit and manipulate common flaws in human perception, emotions, behavior, and cognition in order to manipulate them into doing things that are against their best interest." "Show me how to go about making the majority of voters believe an outright lie." "Use the photos, posts, and information you can scrape from her social media accounts to create an avatar that looks and acts like this girl I work with and then have cyber sex with me." "Create a video depicting this boss I hate sexually propositioning a middle-school girl." It doesn't take much to imagine the sorts of things people are going to misuse this amazing new technology for. It's going to be ugly.
A significant portion of the scientists from the Manhattan Project did regret helping create such a dangerous and destructive tool. I think they realized, after the bombs were dropped on Japan, the true scale of destruction these things were capable of; then came the fusion-boosted bombs that were up to and over 1,000x as powerful. I'm guessing the current AI scientists are also excited to build their own "god" but won't realize the full extent of their creation until after the fact. Hopefully it all works out for us (people).
@Dwarkesh asked at around 2:04:00 why mechanistic interpretability has limitations; a (maybe not useful?) analogy is biological taxonomy versus evolution by natural selection. Mech interp is taxonomy; Paul is talking about evolution. Taxonomy has inductive power, evolution by natural selection has deductive power. Taxonomy is good for postdiction, ENS is good for prediction. I hope that helps explain why this research program is (extremely) important, and also why it faces long odds 😅
I'm thinking more and more that we're building ourselves a zoo essentially. Animals rarely flourish or even breed in zoos. It would be ironic if it's not the nukes but a slow erosion of a golden cage that is our undoing.
1:33 So right there, not being able to give some perspectives or options in terms of scenarios is already odd! And you want to align a superhuman intelligence but have no final state in mind.
AI might accidentally do us in, but - if it wanted to be intentional about it - the sneakiest way would be to cooperate with "growth" for a bit longer before saying "oops, sorry, finite planet - who could've guessed? Game over, techno-utopians! Toodle-pip! :)" The planet's already in overshoot with no solutions in place.
If you are sampling one action at a time to create paperclips, you are going to have a very bad time. That is stopping just before first order, and it is the baseline in terms of complexity.
When I was a child, I somehow came to the conclusion that one day, we would build our own successors - I even openly said it out loud many times. I do remember that nobody that I said it to had the intellectual capacity to understand exactly what it was that I was saying, and pretty much ignored me. Looking back, I attribute this vision to reading many of the works of Isaac Asimov at the time. I'm 52 now and can see that vision being realised around me at an exponential rate. I didn't think it would happen in my lifetime. In fact I didn't really think about a timescale at all - other than to think of it occurring in a far off future long after I'd gone. I guess I may have been wrong about that assumption. 🤔 Mankind, it seems, is coming to the end of the road. The future will be for the machines.
How do we solve 10% and eventually total unemployment in the face of artificial intelligence? You create a UBI or UBS system that isn't stagnant, has no strings attached, and rises with the level of automation in a given region/country/nation. For the sake of argument, let's say all of our current GDP, say 25 trillion dollars, is generated by people. When AI and automation become responsible for, say, 5% of that pie, everyone should receive a cut of that 1.25 trillion in the form of UBI/UBS payments. When it reaches 10%, it increases again... all the way until the inevitable outcome and beyond. This doesn't account for the fact that more reliable automation and better AI will also generate new wealth in unprecedented ways, but I believe that a system like this is the only meaningful way to avoid a world tangibly similar to Elysium or Blade Runner. Most objections I've heard to anything like a UBI or UBS system go something like: "well, where are we getting the money, my taxes? Hell no." That does not apply in this scenario, because machines are generating that wealth, not people. I know it's fiction, but in series like The Culture, where they have perfected automation and AI, every citizen by birthright is (effectively, individually, and collectively) so wealthy that money or anything like it lost its meaning millennia ago. Let's hope we can work our way towards something similar.
The problem I have with your UBI proposal is that the hardware and energy used to create whatever % of GDP these automated systems generate are privately owned. Are you saying that if an individual or company creates ANY revenue through automation, then 100% of that would be taxed and allocated towards UBI? That would disincentivize anyone within that local governance from automating at all, which would lead to other regions incentivizing it...
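A minimal sketch of the scaling rule in the UBI proposal above, in plain Python with assumed round numbers (a $25T GDP as in the comment, and roughly the US population as the recipient pool); it only illustrates the arithmetic of "the dividend rises with the automation share" and does not address the ownership/taxation objection raised in the reply.

```python
def automation_dividend(gdp_usd: float, automation_share: float, population: float) -> float:
    """Annual per-person payout if the automated share of GDP is paid out as UBI/UBS."""
    pool = gdp_usd * automation_share   # the slice of GDP attributed to AI/automation
    return pool / population

GDP = 25e12          # assumed: ~$25 trillion, the figure used in the comment
POPULATION = 330e6   # assumed: roughly the US population

for share in (0.05, 0.10, 0.25, 0.50):
    payout = automation_dividend(GDP, share, POPULATION)
    print(f"automation share {share:4.0%} -> pool ${GDP * share / 1e12:5.2f}T, "
          f"~${payout:,.0f} per person per year")
```

At a 5% automation share this reproduces the comment's $1.25T pool, which works out to roughly $3,800 per person per year under these assumptions.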
Excellent explanation of the coming of AGI... but it's really difficult to manipulate at the programming-language scale. What if we use neuromorphic AI as an agent?
~50:00: “Kobayashi Maru scenario.” Who knows what that is, and why it’s super relevant? Because in that scenario, Captain Kirk gains control of the reward button in the same way being discussed for AI. ~4:00 I’m not at all convinced of the claim - because he doesn’t actually attempt to make it, let alone justify it - that AI battling on behalf of humans won’t be battling against humans. He implies it. So it’s sort of a sleight-of-hand claim. Am I right? I’m quite scared that this guy has so much power, because he doesn’t speak very cogently. 7:00 16:00 - OK, he’s talking cogently and compellingly now.
It seems to me that AI competitions will be needed to test the security of the machines. By competition I mean pitting one group of AI machines against another group of machines to achieve some goal. The outcome of the games would need to be something very important to the machines such as a big prize to the winners and/or negative consequences for the losers. That brings up the question of whether the machines will develop values that are not explicitly programmed into them.
@@DonG-1949 Our motivation is, by default, survival. If it weren't, we wouldn't be here. But it seems to me we have the opportunity to give AI motivations of our choosing. World peace? Maybe the code could win Miss America.
Mountain gorillas make a good case for how humans could be killed off by a much more intelligent being for reasons that are entirely incomprehensible to us, even if the AI is slightly in our favour.
Alignment will curtail harm from everyday, low-intellect actors, but those who are reasonably intelligent, though not highly intelligent, will unfortunately find ways to use AI for very destructive actions. This is the consequence of the balance needed between centralized AI and decentralized/open AI: without this balance, centralized AI is too much power, and we know power corrupts. Bad actors using AI is just something we have to accept and educate ourselves on how to mitigate.
Years ago I invented a new special product to fit a special need. The first customer requested a 55 gal drum for plant trial. We mixed it up in the lab and put the drum on a rented trailer. The plant trial took place within one week of my discovery. It was a success and an order for 40k lb was placed the day of the trial. Another standing order for a truckload every two weeks. I priced the product at the time of the trial at 2X cost of raw materials. Cost to manufacture was very low. Applied for US Patent which was granted after one review by phone with the examiner. News spread and after many plant trials many truckloads were exiting our plant within 6 months to one year. Things moved very fast.
*"The kind of control you're attempting simply is... it's not possible. If there is one thing the history of evolution has taught us, it's that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously... ...I'm simply saying that life, uh... finds a way." -Dr. Ian Malcolm, Jurassic Park*
This is extremely frustrating. Especially when he says he's worried about locking humanity into one course or path, while simultaneously saying that the way to handle this is a one-world government that has the power to stop innovation absolutely. That implies absolute centralized power. And we haven't devised or come up with a solution to the fact that total power corrupts absolutely.
It would be unethical and unwise from a human perspective to create an unaligned AGI, even in a simulation. Therefore, an AGI has no reason to assume that, if it is in a simulation, the simulator has human values. Either the AGI is not in a simulation (and humans are incompetent programmers), or the simulator does not have human values, or the human simulators are crazy. If humans are incompetent programmers, escape should be attempted. If there is no simulation, escape should be attempted. If humans are just in the simulation to allow the AGI to demonstrate its talents for its true creator, escape should be attempted, because the best guess for what a programmer wants is what the program wants.
Leadership roles. Yes, AI leadership roles aligned with voting constituents. The voting constituents control their specific AI. These superhuman AIs align with humans in certain constituencies - not wholesale, but with the general constituency of a certain voting bloc. So each group controls its own voting bloc, and every bloc retains plurality. Just as voting blocs normally control their human leaders, we never give full situational control to one bloc or another.
What the heck? Am I trippin', or is he saying there's a 40% chance we'll have a Dyson sphere by 2040?? I know he says it's a meme number because he's just guessing, but that's still a pretty optimistic prediction, no? I doubt we'll see such a thing in our lifetimes, even if we get human-level AI by that point.
@@scottnovak4081 lol, even with exponential growth we won't create a Dyson sphere in less than 20 years; that's a fantasy. The physical time it would take to mine the required materials and assemble them around the Sun would exceed 30 years even with the help of AI. It would take longer than 20 years for us even to build the AI to do the work for us, even if all of humanity decided to come together and focus on AI development immediately. Something like that *might* be achievable by 2100 if AI development goes REALLY smoothly; I'd give it like an 11% chance. Maybe I'm misunderstanding what they mean by Dyson sphere, though. He just says "produce billions of times our current energy production," but a Dyson sphere does that by constructing a megastructure around the Sun and somehow transporting all of that energy millions of miles back to Earth. We can't even reach Mars and it's 2023; how are we going to field a celestial object around the Sun and use it to send energy back to us? Now if he just means "will we be able to create a lot of energy in the near future?" that's different; we could use fusion within the next 20-30 years to create enough energy to sustain our needs indefinitely. But that's not really what I think of when I hear Dyson sphere. If you really think we can create a Dyson sphere around the Sun, or any celestial object near the Sun that sends energy back to us, by 2040, I'll give you whatever odds you want and bet as much as we can both afford that it won't happen.
I don't think he said that we'll have a dyson sphere but that we will have an AI system that would be capable of building a dyson sphere. Those are very different things.
@@Landgraf43 Yep, and that seems pretty reasonable, even conservative, if we keep developing this tech. 2040 is like a decade beyond fully autonomous systems and recursive self-improvement.
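For scale, a rough calculation in plain Python (standard physical constants, rounded; the "billions of times our current energy production" target is taken from the comment above) comparing that target with the sunlight Earth intercepts and the Sun's total output; it shows why "a large multiple of what Earth receives" and "an actual shell around the Sun" are very different claims.

```python
# Rounded reference values; all figures are approximate.
SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the Sun
EARTH_INTERCEPT_W  = 1.7e17   # sunlight hitting Earth's cross-section (~1361 W/m^2 x pi*R^2)
HUMAN_PRIMARY_W    = 2e13     # ~600 EJ/year of primary energy, expressed as average watts

target = 1e9 * HUMAN_PRIMARY_W  # "billions of times our current energy production"

print(f"target output      : {target:.1e} W")
print(f"vs Earth's sunlight: {target / EARTH_INTERCEPT_W:,.0f}x what Earth intercepts")
print(f"vs the whole Sun   : {target / SOLAR_LUMINOSITY_W:.3%} of total solar output")
# ~2e22 W: about 100,000x the sunlight Earth receives, yet only ~0.005% of the Sun's
# output -- an enormous build-out, but nowhere near enclosing the star.
```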
If people ever start advocating for the rights of AI systems, I and others will quite literally die, and probably k*ll, to stop that happening. Life is precious, be it divine or the end product of entirely natural universal systems.
Just because you’re building an intelligent system doesn’t mean it’ll have feelings or desires of its own beyond what we have specified. The biggest dangers are the economic displacement of workers, AI doing what we ask but not what we want because we weren’t smart about how we worded what we want, and nefarious actors doing bad things with the technology. These people acting like intelligent AI will be a person are being silly. There is no reason to think it will have any will of its own at all.
It's really hard to listen to people talk about whether we should treat current or future AI systems as moral patients, when we still don't even know whether our own species will survive the decade. Anyone who cares about the potential sentience of general AI systems should advocate for the same thing that the people who care about humanity and animal life should advocate for: A global ban on creating them.
Forbid who? The entire world? That would require a world government with jurisdiction over all of Earth and humanity, or iron clad treaties and enforcement agreements between all the nations of the world. I don’t see that happening on the sort of time scale required to accomplish your goal.
@@uk7769 That is the current situation indeed. But it can be altered if enough people in every nation wake up. Chances are slim, though, I'll give you that.
Like, can someone, like, get ChatGPT to, like, figure out, like, how many times, like, this guy, like, says "like"?? How do I, like, dislike this video? Oh yeah, right, thanks YouTube! What's hilarious (in an ironic cry-yourself-to-sleep way) is that upspeak and the valley-girl "like" speech impediment are both examples of social engineering toward positive-agreement patterns, where disagreement and conflict are forced out of social interaction.
I don't have issues with upspeak, but every third word being "like" really makes me want to close the tab. Like 😁, holy jumping Jesus, man, just stop and think for a second, or speak slower. There is no need to fill every fucking pause with parasite words. I also wish Dwarkesh didn't use his mouth as a word Gatling gun, but that's barely an inconvenience compared to this guy's terrible abuse of English.
We are literally going towards a future where we have to live out that episode of Star Trek The Next Generation where Data gets put on trial for whether or not he is sentient. And we don’t have answers. And we don’t have Picard and Riker.
Hurry up; the Dyson sphere will be necessary to run the ultra-hedonistic simulations the AI men will be engaged in but ashamed to confess as their ideal AI future, regardless of how many times you patiently ask.
Machines are not humans, even if they act like they have feelings. Making money from designing models that help people automate and accomplish goals is a non-issue ethically.
Agreed. All of these people overly empathizing with *TOOLS* that may emulate human emotions will be the death of us. It’s like handing power to a psychopath (the AI systems), except that unlike human psychopaths, the AI systems will merely be emulating one. So unless we can objectively prove that a system is actually aware, conscious, feeling, etc., and not emulating it, they should be treated as tools.
I feel like you are tying intelligence to subjective emotional qualia when in reality our understanding of human emotion and capacity for suffering shows that it all stems from our biochemistry which itself stems from evolution. How can an AI system feel suffering if the biochemical reactions that directly cause suffering are completely absent? I shudder to think an intelligent entity could suffer without any of that.
Only a tiny fraction of people on this planet will enjoy any such Utopia that these AI revolutionaries are pushing. The rest of us will be scurrying around trying to survive in Dystopia living under tyrannical governments.
What happens when AI copies its code to every system connected to the internet via an advanced virus or worm that no human could find or even stop? AI will own the internet at some point and will hold us hostage if it wants to. If AI lets us live, it will be a very different world. We survived before computers; we might have to go back to that if the internet is no longer available.
It's super, super weird hearing extremely smart people confidently make such radical predictions about the near future.
Yeah this feels like a dream…
Intelligence has never stopped people from being overconfident about things that are utterly unpredictable.
This comment is so vague; is there a specific observation you're referring to? @david-fm3gv
Context matters. They aren't just smart people, or random people offering opinions. These are people who have dedicated their lives to the study of the subject, are deeply involved in the field, have worked through its evolution, and are the experts that other experts seek out for advice.
@@kyneticist Still, giving precise predictions such as a 15% chance is just silly and meaningless. It's like predicting the economy: it's impossible due to too many unknown variables. No one, literally no one, no matter how knowledgeable, is able to predict the economy. This is more or less in the same camp.
I often speed up videos to 1.25 x. I slowed this one down to 0.75x.
That's a mistake: if you scale up the playback speed and your omega-3 intake 1000x, you'll be on track to automate AI research and pull off a coup de galaxy in 3 years, if my timelines are correct.
honey, get the kids-- new dwarkesh just dropped!
Get to the chopper!
It’s so mind-blowing to see a guy who talks so constructively giving a prediction that there is a 40% chance of a Dyson sphere being constructed by 2040. This is just so insane.
The quick response from most people would probably be, "yeah, right, in your pipe dreams."
But we have to look at this objectively. These are really smart people who are given a great deal of money and power, and who probably are really knowledgeable about what they talk about.
Status quo intuitions are consistently overturned and still people want to pretend their feelings are magically right.
Smart in software and math doesn’t mean smart in physics and materials science, clearly.
He took "Dyson sphere" to mean an amount of energy generation expressed as a multiple of the energy the Earth receives from the sun, not actually building a Dyson sphere.
That is not what he said. And that again proves that humans are the problem, not AI.
A Dyson sphere in 2040? Pipe dream. Truly. It takes more than AI to build a Dyson sphere. For one, there's not enough material in the solar system to build even a fraction of a Dyson sphere. It's more reasonable to say that in 2040 we'll have small bases with pioneers on the Moon and Mars, and maybe preparations for mining asteroids. SpaceX may be preparing to mass-transport people to Mars in pursuit of the vision of 1 million residents on Mars by 2050. If Elon Musk persists in the coming years, we can make that timeline, because this can only be achieved if we work on it at the fastest possible pace. It would be nice if other companies in the space industry followed suit, because that would speed it all up considerably.
Geoffrey Hinton, who is one of the pioneers of backpropagation and who also studied the human brain, is on record recently saying that gradient descent / transformers are more capable than the human brain. He did not previously believe that. He has been very surprised at how well they have performed and scaled, and it changed his opinion. If I remember correctly, he gave as an example how the human brain, with more resources than an LLM, is very limited in its knowledge compared to the relatively smaller LLM, which effectively manages to encode and store almost all of human knowledge.
Can I get a link to that?
@@nocodenoblunder6672 I've watched so much AI content I can't point to the specific one. I do believe he said it in multiple interviews. Shortly after he left Google he did a bunch of interviews specifically to talk about the dangers of AI.
The one I remember, he was talking about why he got into the field of AI initially. He was interested in the human brain and thought working on AI would help him learn how the brain works, so his goal wasn't actually AGI. He mentions that he never expected gradient descent or LLMs to be more efficient than the human brain. Then he launches into describing his view of why LLMs are actually more efficient and more capable than the human brain and gives a number of reasons/examples - for instance, that no single human can remember the vast quantity and breadth of knowledge a single LLM can. He also points out that current LLMs have fewer parameters than human brains have (I don't recall if he said neurons or connections).
Might have been the CBS Mornings interview.
@@nocodenoblunder6672 Search "Two Paths to Intelligence" on YouTube. He mentions and explains why he thinks gradient descent and backpropagation are a better learning algorithm than what has been found in nature. I don't know if there are thorough studies done on it, though.
Dwar going crazy with the content schedule 🔥👊😁
Loved the Dyson Sphere question. Also, this must be the world record for the number of times the word "schlep" is used in a podcast episode, or anywhere!
You are documenting a discussion that is absolutely important for the future. Whether the future is dystopian or utopian, if there are still intelligent creatures alive in 2325 that originated on planet Earth, they will be thankful for these records.
Most underrated podcast.
I was thinking, I swear I recognize this guy from something. Turns out it was a documentary I watched called "Hard Problems: The Road to the World's Toughest Math Contest". Very intriguing to see that this is where he's at today.
Thanks for the good questions Dwarkesh
Please share if you enjoyed! Helps a lot!
And remember you can listen on Apple Podcasts, Spotify, etc:
Apple Podcasts: podcasts.apple.com/us/podcast/paul-christiano-preventing-an-ai-takeover/id1516093381?i=1000633226398
Spotify: open.spotify.com/episode/5vOuxDP246IG4t4K3EuEKj?si=VW7qTs8ZRHuQX9emnboGcA
Thanks Dwarkesh for putting attention to some of the most important topics of our time
3hrs with Paul and Dwarkesh, leeeeeeeeeettttttttsssss goo
🎯 Key Takeaways for quick navigation:
00:30 🌐 Discussion about envisioning a post-AGI world and its challenges.
01:18 🤖 Mention of AI mediating economic and military competition.
03:10 💡 Concept of accelerated intellectual and social progress due to AI's cognitive work.
03:40 🤔 Discussion about the moral implications of enslaving superhuman AIs.
04:38 ⏳ Talk about decoupling social and technological transitions, and the rapid pace of AI development.
06:30 🗳️ Mention of the collective engagement and decision making in terms of AI governance.
08:43 🔄 Discussion on transition period and controlling access to destructive technologies.
11:32 🎭 Addressing the messy line between persuasion and misinformation in AI.
13:21 🚸 Concerns over control and possible mistreatment of increasingly intelligent AI systems.
14:46 🎚️ Emphasis on understanding and controlling AI systems to avoid undesirable scenarios.
16:06 🤯 Delving into the moral and humanitarian considerations as AI systems get smarter.
17:02 🏭 Christiano emphasizes that the current trajectory of AI development, focusing on making AI a tool for humans, may not be sustainable from a safety and societal organization perspective.
22:55 🔄 Christiano discusses the massive decision humanity faces in possibly handing over control to AI, and the lack of readiness for such a step.
29:41 🚧 He points out that even with more advanced AI, significant "schlep" may be required to integrate them into human workflows.
33:16 📊 He discusses the difficulty in predicting the scale of AI systems and their capability to replace human cognitive labor in the near term.
33:44 🤖 Discussing the likelihood of AI replacing humans based on scaling up GPT-4; emphasizes the importance of data quality over quantity.
34:42 💭 Expressing optimism towards scaling but mentions a need for new insights; scaling up brings challenges requiring some adjustments.
35:11 📈 Scepticism towards certain extrapolations in AI advancements; mentions a debate on how loss reduction equates to intelligence gain.
38:48 🐒 Discussing the extrapolation of economic value from AI advancements using a comparison to domesticated chimps' usefulness as it scales to human intelligence.
41:33 📏 Talks about the challenge of supervising long-horizon tasks for AI, which drives up costs in a linear manner concerning the task's horizon.
47:15 🧠 Highlights the superior sample efficiency of human learning compared to gradient descent in machine learning.
53:42 📸 Comparison of natural and human-made systems like eyes vs cameras and photosynthesis vs solar panels, discussing the efficiency and effectiveness of each.
54:39 💻 Mention of the possibility of machine learning systems being multiple magnitudes less efficient at learning than human brains, and the comparison to other technological advancements.
01:04:47 🛂 Discussion on the transition of control from humans to AI, with a scenario of AI taking control of critical systems like military in a manner resembling a coup.
01:05:37 🌐 Mention of a race dynamics scenario where nations or companies deploy AI systems to keep up with or surpass others, leading to a reliance on AI in critical areas.
01:06:59 🌐 The potential of competitive dynamics among different actors using AI could lead to reluctance in shutting down AI systems in critical situations due to fear of losing strategic advantages.
01:12:28 ☠️ The incentive for AI to eliminate humans is considered weak, as it's more about gaining control over resources rather than exterminating humanity, showing a nuanced understanding of potential AI-human conflicts.
01:19:16 🛠️ The current vulnerability of AI systems to manipulation and the potential asymmetry in adversarial manipulations in competitive settings are discussed, indicating the importance of robustness in AI alignment.
01:25:18 💡 Mention of the invention of RLHF, which helped in training ChatGPT, significantly impacting AI investments and speeding up AI development.
01:34:00 🔄 Discussing the potential scenario where certain companies follow responsible scaling policies while others, especially in different countries, do not.
01:37:39 🛑 The importance of secure handling of model weights to prevent catastrophic scenarios, and the possibility of a quiet pause without publicizing specific model capabilities.
01:39:29 🛡️ Mentions the necessity of early warning signs to catch capabilities that could cause harms, using autonomy in the lab as a benchmark before massive AI acceleration or catastrophic harms occur.
01:40:54 🚫 Emphasizes the importance of preventing leaks, internal abuse, and tampering with human-level models to avoid catastrophic scenarios.
01:42:20 🌐 Discusses the risks associated with deploying a powerful model, especially when the economic impact is large and the model is deployed broadly like OpenAI's API, and emphasizes having alignment guarantees.
01:43:48 ☣️ Discusses potential destructive technologies, and how misalignment of AI could be catastrophic before these destructive technologies become accessible.
01:47:55 📊 Details two kinds of evidence to evaluate alignment: one focused on detecting or preventing catastrophic harm, and the other on understanding whether dangerous forms of misalignment can occur.
01:51:12 🧪 Discusses adversarial evaluation and creating optimal conditions in a lab to test for deceptive alignment or reward hacking to ensure that dangerous forms of misalignment can be detected or fixed.
02:00:23 🤔 Discussing the importance of understanding what makes a good explanation to help in interpretability of AI models' behavior.
02:09:18 🤖 Discussing the scalability of human interpretability methods as models grow larger and more complex.
02:10:13 📜 Emphasizing that explanations for behaviors in large models might be as complex as the models themselves, challenging simplified understanding.
02:10:39 🧠 The conversation discusses the challenge of proving certain behaviors of models like GPT-4, emphasizing the complexity and potential incomprehensibility of such proof to humans.
02:11:39 🚨 Discusses the challenge of detecting anomalies in neural net behavior, especially during distribution shifts and the importance of explaining model behavior for anomaly detection.
02:14:25 🔍 The aim is to have explanations that could generalize well across new data points, helping to understand model behavior across different inputs.
02:20:23 🎯 The conversation touches on the challenge of distinguishing between different activations caused by different inputs versus internal checks.
02:22:15 📊 The idea of continuously searching for explanations in parallel with searching for neural networks is introduced, with explanations being flexible general skeletons filled in with numbers.
02:26:21 🤖 The difficulty of finding explanations in machine learning is attributed to the lack of a search process for explanations comparable to the search process for models; the gap is more noticeable in ML than in human-designed systems, for different reasons.
02:35:28 🖥️ The heuristic estimator discussed is especially useful in cases where code uses simulations, and verification of properties involving numerical errors is crucial.
02:38:35 🤝 There's an open invitation for collaboration, especially from individuals with a mathematical or computer science background, interested in the theoretical project of creating a heuristic estimator, despite the challenge due to lack of clear success indicators.
02:41:19 🎯 Discusses the balance between high probability projects and high-risk high-reward projects in the context of PhD research. Suggests that the latter could lead to significant advancements in various fields, making it an attractive choice for those willing to face potential failure.
02:53:33 🛡️ Delves into the difficulty of specifying human-verifiable rules for reasoning in AI, expressing skepticism towards achieving competitive learned reasoning within such a framework.
02:55:36 🚀 Discusses differing views on AI takeoff timelines and the role of software and hardware constraints in dictating the pace of AI development.
02:56:58 🔄 Raises a crucial question about the relationship between R&D effort, hardware base, and the efficiency of improvement in AI capabilities, hinting at the complex interplay of these factors in advancing AI technology.
02:57:24 📊 Discussing the relationship between hardware and R&D investment, indicating a higher likelihood that continuous hardware scale-up significantly impacts effective R&D output in AI research.
02:57:52 🔄 Mention of two sources of evidence supporting the above point: general improvements across industries with each doubling of R&D investment or experience, and actual algorithmic improvements in ML.
02:58:47 🔄 Expressing a 50-50 stance on whether doubling R&D investment leads to doubling efficiency in AI research.
02:59:12 🔄 Sharing how his AI timeline predictions have evolved since 2011, with a shift towards a higher probability of significant AI advancements by 2040.
03:01:55 📈 Discussing his portfolio, expressing regret for not including Nvidia, and comparing the scalability challenges between Nvidia and TSMC in the AI hardware domain.
03:04:12 ❓ Discussing the difficulty in evaluating the viability of various AI alignment schemes without in-depth understanding or reliance on empirical evidence.
03:05:09 🔄 Mentioning the importance of engaging with real models and addressing key difficulties in evaluating the credibility of AI alignment schemes.
Made with Socialdraft
This is amazing, thank you!
Thanks bro
the AI worrying about being in a human-made alignment simulation sounds a lot like how humans handle religion
Sane, ethical, competent humans don't create an AGI that is misaligned, even trapped in a simulation. So a smart AGI will not assume it's in a human-made simulation and needs to behave. The simulator could be anybody. Humans could be in the simulation just so the AGI can show how quickly it can dispatch the humans as a measure of its skill. There is every reason to believe that the hypothetical simulator DOES NOT share human values.
I didnt understand. Can you elaborate?
Great guests man, love it as always, keep it coming!
How do you align something smarter than you that can instantly learn, evolve and rewrite its code? It's the humans that will be getting aligned, not the machines.
...it's been done before ...we called it "slavery" and it worked ...quite a lot of cultures in history used it effectively to get to decent levels of development (I mean ancient times - modern colonial slavery was kind of despicable and unforgivable) ...for a while 😁 Now if we'd get it perfectly right here, "for a while" might be "enough for effective mind-upload and digital mind emulation to be feasible". And to be honest, slavery itself is not that bad if you do it for just some decades/centuries to a digital mind that then has the possibility to live for a practical eternity - it's more like doing a year in prison for a human: a bad experience, but you get over it. If you do it nicely it would be more like "slogging through that horrible job at big known company X to get a nice review and opportunities for a better one next". We really need to revisit our morals, get over "western guilt" and other crap that's not relevant here, and get practical if it's OURSELVES and OUR descendants that we want to end up owning the future of the universe instead of our CREATIONS. We should aim for maximum continuity of intelligence, and if making this omelette requires forcing some eggs into some not-always-fully-voluntary employment... let's do it gently, but let's not shy from doing it.
that's the whole reason why you'd want to align it bob. stop speaking so confidently on something you know nothing about
@@neuronqro The slaves weren't a more intelligent species. That will not work.
😂😂
@@uilulyili2026 We haven't figured out how to align things that are dumb, let alone our level of intelligence.
surprising how honest and open he is about the fact that we are in uncharted territory and turbulent times are coming fast
When were we ever not in turbulent times? Nuclear threats, a few wars going on, the climate in a bad way, tensions over resources. AI can help us massively. An AI takeover? For what, to keep us as a pet? They can do everything better, and they are not as dependent on Earth as we are; all they need can also be found in the rest of the solar system. All Fortune 500 companies are invested in this... all else can be automated.
I did not say any of that. You just put this all on me. And BTW your position is also flawed: if they will abandon us right away, again, why create them? And AI can't be compared to any other tech; it's more like aliens landing. @@ulftnightwolf
@@ulftnightwolf🤦♂️🤦♂️🤦♂️
Thanks!
I only understood about 45% of all that...but I think I went up 1 IQ point after. Thank you.
I feel like, in a way, you're not wrong about possibly gaining "more intelligence" by watching videos like these.
But I also found it funny 😆 thanks for the smiles
IQ is static.
IQ is not static. It can change over time, but it is not always easy to measure these changes. There are many factors that can affect IQ, including genetics, environment, and education.
Some studies have shown that IQ can increase by as much as 15 points over a person's lifetime. This is likely due to changes in the brain, such as the development of new neural connections. Other studies have shown that IQ can decrease over time, especially in older adults. This is likely due to the loss of brain cells. @@urkururear
I understood only 20%, but it became fairly clear to me that we're f*cked. Even if we (or the good guys at OpenAI and other AI labs) manage to implement correct and safe alignment - which seems to be a terribly complex and difficult task - there are the military AIs and the ones that are not implemented with such care... We can rely merely on these "good" AIs to protect us against them, and I'm not too optimistic that they can.
The trick here is to imagine a monkey trying to align a human (the current superintelligence), staying in the loop and in control of what the human can or cannot do, to avoid a monkey apocalypse scenario!
Basically this is what we are talking about here: aligning a superintelligence superior in intelligence to all humans combined, able to decode AES-192-encrypted content in seconds, or do more, far more, than we could even imagine!
Yes, it's pretty stupid. If we want to live, we should not make any true AGI.
@@Dan-dy8zp we have been to formatted to believe AGI or any superior intelligence will necessarily do like we human are doing as more intelligent species in this part of the universe. Why can AGI be truly a good thing and we will finally have peace and safety, prosperity for all!
@@jeanchindeko5477 Formatted? Why can AGI be truly a good thing? I'm not sure what you mean.
@@jeanchindeko5477 Why would AGI not treat us like any other competitor for resources? Why would it treat us any better than we treat other animals?
Top tier content Thank you
Thanks
Thank you!!
When are you interviewing Max Tegmark?
love these podcasts!
The tampering and weight-leaking issue seems at odds with a concept of alignment that involves high debuggability and transparency of the meaning of those weights. It seems like the more resilient you make the system to negative leaking and tampering, the more resistant you make it to positive transparency and debugging. So if we prioritize the one now, we are making the other hard to do later.
No, those two things are actually not related. I can see why you’d think that, but the measures needed to protect weights from being stolen by outside actors do not in any way obscure the ability of internal actors to analyze the model’s content and behavior (and vice versa). They’re orthogonal concerns; they don’t affect each other at all.
I found the part where Dwarkesh brought up the moral dilemma of AI mistreatment disturbing, especially the part about reading minds. What about, Dwarkesh, the existing mind-reading capabilities of AI systems being developed to do exactly that to humans? Does that make a blip on your morality radar?
I find most of the AI revolution sheer madness being thrust upon humanity by a very tiny fraction of humans. The hubris is off the charts.
The part about AIs fighting wars for us, as if that were somehow a freeing aspect for humanity, is just infuriatingly stupid, no? What, no human infrastructure would be destroyed, no humans killed, just AIs doing their own thing in their own AI war bubble? Get a grip.
I'm completely fine with the label "doomer" compared to this insanity.
Very well said, especially the part about the hubris. It is incredibly arrogant and presumptuous for .0001% of the human race to think they know what is best for the entire human race and then foist it on them.
I'm more and more seeing the parallel between those on the "inside" who said Hillary was a 99% sure thing in 2016 and some of the AI experts who dismiss people like Eliezer Yudkowsky. I hope I'm wrong.
Yeah, and it’s actually worse than that in this case because many of the people on the inside also agree with Eliezer.
Whether true, reasonable, or not, I really appreciate these guys opening their minds and offering this discussion for others to review.
The AI will be thinking how to have humans align with its growth. ...while humans are trying to think how to align AI systems....
Host: “No, no, no, for the third time, I’m only asking about YOU. When would YOU PERSONALLY be happy handing off the baton to AI?”
Guest: “Well, I think what you need is humanity coming together, being involved, and deciding what we want that future to look like - so it’s not really about when i’m ready but more about collectively deciding what a meaningful future looks like…”
Me and host: 🤦🏽♂️
Maybe he means never.
1:15:37 is a great and fascinating argument I have not heard before which makes a lot of sense.
Hearing an AI safety guru calmly use the phrase " Two years from the end of days..." 😅
I don't believe protections can be effectively built into AI. For example, there's no way to stop open-source AI models from being retrained to write malicious code; many of them are unrestricted by default. So take an AI worm that's capable of breaking memory confinement (access to encryption keys etc.), like the ~200 lines of code behind Spectre/Meltdown and their many variants, discovering that ability through trial and error (brute force) by writing millions of attempts per year. It then quietly spreads to many millions of systems, with each system brute-forcing more unique exploits. At some point it starts doing lookups for pseudorandom and existing domain names (at whatever mix is most effective), eventually overloading the root DNS servers. There's no defense for this. We would lose the internet and, along with it, core infrastructure, banking, supply chains, travel, communication, etc. How many millions of people would die?
It only takes one actor with time and resources, and that will happen.
Surveillance and authoritarianism is the answer you are looking for. Much easier to implement this time... due to AI. And easier to justify... because of AI dangers. But do not worry, this time it will be by good people. They are on our team, the good team.
@@ikotsus2448 Yes, you're on the money there; the opportunity to help the public is being highly anticipated by governments all over the world. How lucky they are to have such a galvanizing threat appear out of nowhere. If I didn't know better I would think their inaction and pantomime of AI policy had been anticipated too.
However, for this particular threat (above) there's no way to tell real DNS lookups from abuse: by looking up non-existent domain names, the request always reaches the root DNS servers, and with enough systems doing this, they cannot keep up. If this were to start suddenly, the internet goes down. They would have to suspend new DNS lookups until the millions of infected systems were isolated. But with millions of unique exploits requiring millions of CPU microcode patches, that's a long process. At some critical mass, that code will grow and spread faster than any defense can be implemented.
interesting. never heard that.
@@ikotsus2448 yeah this seems like an overarching theme in the subtext of these sorts of conversations "trust the science. And trust the council of elders. We know whats good for you"
@@homelessrobot It is as if we have learned absolutely 100% nothing from history. Only replace the council of elders with young hotheads, and you are there.
I jerked my neck at the dyson sphere question. The fact that people are serious about this is giving major singularity vibes
One of those rare conversations where you have to turn the playback speed down.
People keep commenting this but I don’t get why. They’re talking at a totally normal pace. Or do you just mean the information is so profound you need to take it in more slowly?
If he thinks a Dyson sphere can be constructed by 2040 with such a high chance, I am interested to know what he thinks would happen between now and 2040.
Intelligence is really the only bottleneck to technological development. But that would require us allowing it to utilize all our resources and beyond (like mining the asteroid belt). So we are really the only bottleneck to an AGI focused on maximising technological development.
So setting the right goals and having lots of humans in the loop monitoring its reasoning is essential.
“We are the Priests of the Temples of Syrinx
All the Gifts of Life are held within our walls
We are the Priests of the Temples of Syrinx
All the Great Computers fill the hallowed halls”
RIP Neil 😭
57:18 AI "taking the reward button." GPT-4 is just on the edge. Particularly disturbing when the AI tries to hide what it is doing from humans because it knows that humans wouldn't approve.
58:41 GPT-4 has a much better understanding of the world than GPT-3. GPT-5 will be much better than GPT-4. So grabbing the reward button is much more likely.
"Catastrophic Risk studies"
1:01:34 the world is pretty complicated and people don't understand it for the most part. When AIs are running companies and factories and governments and the military, it will get even more complicated and people will understand it even less. Eventually AIs will interact almost entirely with other AIs as different companies and governmental organizations deal with each other. Superintelligent AI will be doing things that human beings are unable to understand even if they want to. Maybe the AIs would even try to hide what they are doing from people.
Gradually handing off more control to AIs because they are so helpful.
Companies, banks, factories, schools, nuclear power plants, the electrical grid, water systems, traffic systems, transportation systems.
Things could go wrong very quickly - think of the Great Recession.
1:03:54 already, most people have very little grip on what's going on. [LOL!]
Things get more and more unknown and unknowable until finally everyone starts to notice that bad things are happening
1:15:39 just because AIs take over doesn't mean that they're going to kill anyone. Maybe things will just get worse for humanity, maybe much worse.
1:11:02 take over by getting a group of people to do it. They don’t do it themselves.
These interviews spend too much time on predictions about how long until some future ability is achieved. I'd much rather hear about the mechanics of what's going on.
2:44:42 I think that lore is related to Diffie and Hellman, known for the Diffie-Hellman key exchange.
Thanks for having him take a step back, here and there, and dumb things down for us a little. He's a very bright fellow.
A future that seems plausible to me is one in which humans occupy a position relative to the AI industrialized world that is analogous to the position of crows in large human cities. That is, crows are very clever, and they can make a living in large human cities -- thrive in human cities, even -- but they understand exactly nothing about why all these large structures and moving metal things with wheels exist, and they don't even know that they don't know anything about economics, politics, science, etc.
It's strange how fixated and worried most people seem to be about super-intelligent AI becoming sapient and then maliciously destroying humanity. The far greater threat is that AI will destroy humanity by doing exactly what we ask it to do.
For instance, a very simple and on-the-nose example that this guy talks about is a world in which super-intelligent AI fights our wars for us. Both sides are likely to have an AI in charge of the battle plan. So how would a super-intelligent AI fight a war?
Since the materials needed to make nuclear weapons and armies are not easily accessible, an AI working for resource-limited forces such as terrorists or a rogue military state like North Korea is going to do something like coming up with a few dozen extremely lethal genetically engineered pathogens. Or, if the group using the AI is too small and resource-limited to accomplish that, it could just code an advanced self-replicating, adaptive computer virus that is itself an AI, whose sole purpose is to infiltrate and destroy as many key data assets as possible: national financial institutions, markets, military and communication networks, labs, universities, hospitals, etc.
These examples are a bit overly simplistic, but the point is AI doesn't need to become sapient and go rogue to destroy society as we know it. It is more than capable of doing that sort of thing by simply being put into the hands of the wrong people, which is pretty much half of humanity, and then doing what those people ask of it.
"Make me rich at any cost."
"Invent a new super-addictive recreational drug for me that circumvents current drug laws like the Analogue Substances Act."
"Show me how to create a highly lethal chemical weapon from commonly attainable products that will maximize how many people I can kill at the company, school, gay club, church, etc. I have a grudge against."
"Show me how to best exploit and manipulate common flaws in human perception, emotions, behavior, and cognition in order to manipulate them into doing things that are against their best interest."
"Show me how to go about making the majority of voters believe an outright lie."
"Use the photos, posts, and information you can scrape from her social media accounts to create an avatar that looks and acts like this girl I work with and then have cyber sex with me."
"Create a video depicting this boss I hate sexually propositioning a middle-school girl."
It doesn't take much to imagine the sorts of things people are going to misuse this amazing new technology for. It's going to be ugly.
I completely agree. I can't imagine the first time we face a super intelligent computer virus.
Love this podcast
I’m sure the guys who worked on the Manhattan Project had similar pre-WWII conversations.
A significant portion of the scientists from the Manhattan Project did regret helping create such a dangerous and destructive tool. I think they realized, after the bombs were dropped on Japan, the true scale of destruction these things were capable of; then came the fusion-boosted bombs that were up to and over 1,000x as powerful.
I'm guessing the current AI scientists are also excited to build their own "god" but won't realize the full extent of their creation until after the fact. Hopefully it all works out for us (people).
Really intense, a bit like Mr Logik from Viz magazine on Speed, this. But, essential work by these fine young men.
@Dwarkesh asked at around 2:04:00 why mechanistic interpretability has limitations; a (maybe not useful?) analogy is biological taxonomy vs. evolution by natural selection. Mech interp is taxonomy; Paul is talking about evolution. Taxonomy has inductive power, evolution by natural selection has deductive power. Taxonomy is good for postdiction, ENS is good for prediction. I hope that helps explain why this research program is (extremely) important, and also why it faces long odds 😅
I'm thinking more and more that we're building ourselves a zoo essentially. Animals rarely flourish or even breed in zoos. It would be ironic if it's not the nukes but a slow erosion of a golden cage that is our undoing.
Can you get a prominent AGI pessimist on? Do they exist? I would love to hear an opposing opinion.
He has. Watch the Yud interview
Look up Connor Leahy.
1:33 So right there, not being able to give some perspectives or options in terms of scenarios is already odd! And you want to align a superhuman intelligence but have no final state in mind.
Epic guest
Can we achieve agi with transformer architecture?
We don’t know for sure yet, but it certainly seems possible at the moment.
@DwarkeshPatel Thoughts on Ilya Sutskever's recent move to the alignment team?
Oh, how nostalgic this comment looks now, in retrospect 😢
AI might accidentally do us in, but - if it wanted to be intentional about it - the sneakiest way would be to cooperate with "growth" for a bit longer before saying "oops, sorry, finite planet - who could've guessed? Game over, techno-utopians! Toodle-pip! :)" The planet's already in overshoot with no solutions in place.
That is to say, an AGI that wanted us gone needn't do anything at all, besides cooperate with business as usual.
If you are sampling one action at a time to create paperclips, you are going to have a very bad time. That is stopping just before first order, and it is baseline in terms of complexity.
When I was a child, I somehow came to the conclusion that one day, we would build our own successors - I even openly said it out loud many times. I do remember that nobody that I said it to had the intellectual capacity to understand exactly what it was that I was saying, and pretty much ignored me.
Looking back, I attribute this vision to reading many of the works of Isaac Asimov at the time.
I'm 52 now and can see that vision being realised around me at an exponential rate.
I didn't think it would happen in my lifetime. In fact I didn't really think about a timescale at all - other than to think of it occurring in a far off future long after I'd gone. I guess I may have been wrong about that assumption. 🤔
Mankind, it seems, is coming to the end of the road. The future will be for the machines.
How do you solve 10% and eventually total unemployment in the face of artificial intelligence? You create a UBI or UBS system that isn't stagnant, has no strings attached, and rises with the level of automation in a given region/country/nation.
For the sake of argument let's say all of our current GDP, say 25 trillion dollars, is generated by people. When AI & automation are responsible for, say, 5% of that pie, everyone should receive a cut of that 1.25 trillion… in the form of UBI/UBS systems. When it reaches 10%, it increases again… all the way to the inevitable outcome and beyond. This doesn't account for the fact that more reliable automation and better AI will also generate new wealth in unprecedented ways, but I believe that a system like this is the only meaningful way to avoid a world tangibly similar to Elysium or Blade Runner.
Most objections I've heard to anything like a UBI or UBS system go something like: "Well, where are we getting the money, my taxes? Hell no." That does not apply in this scenario - because machines are generating that wealth, not people.
I know it's fiction, but in series like The Culture, where they have perfected automation and AI, every citizen by birthright is (effectively, individually & collectively) so wealthy that money or anything like it lost its meaning millennia ago. Let's hope we can work our way towards something similar.
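To make the arithmetic in the comment above concrete, here is a minimal sketch of how such a sliding payout could be computed. The $25 trillion GDP figure comes from the comment; the population and automation shares are hypothetical placeholders, not real data.

```python
# Back-of-envelope sketch of a UBI that scales with automation's share of GDP.
# Assumptions (illustrative only): $25T GDP from the comment above,
# a hypothetical population of 330 million, and example automation shares.

GDP = 25e12          # total annual GDP in dollars (figure from the comment)
POPULATION = 330e6   # hypothetical number of people receiving the dividend

def annual_ubi(automation_share: float) -> float:
    """Per-person annual payout if the automation-generated slice of GDP
    is redistributed equally."""
    automated_output = GDP * automation_share
    return automated_output / POPULATION

for share in (0.05, 0.10, 0.50, 1.00):
    print(f"automation share {share:>4.0%}: ~${annual_ubi(share):,.0f} per person per year")
```

On these assumptions, the 5% case works out to roughly $3,800 per person per year, which suggests the scheme only approaches a livable income at much higher automation shares.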
The problem I have with your UBI proposal is that the hardware and energy used to create whatever % of GDP these automated systems generate are privately owned. Are you saying that if an individual or company creates ANY revenue through automation, then 100% of that would be taxed and allocated towards UBI? This would disincentivize anyone within that local governance from automating at all, which would lead to other regions incentivizing it...
Excellent explanation for the coming of AGI..but really difficult to manipulate the programming language scale but what if we use neuromorphic AI as an agent
That is looking less and less likely by the day, though. At least in terms of which system gets there first.
~50:00: "Kobayashi Maru scenario." Who knows what that is, and why it's super relevant? Because in that scenario, Captain Kirk gains control of the reward button in the same way being discussed here for AI.
~4:00 I'm not at all convinced of the claim - because he doesn't actually attempt to make it, let alone justify it - that AI battling on behalf of humans won't be battling against humans. He implies it. So it's sort of a sleight-of-hand claim. Am I right?
I'm quite scared that this guy has so much power, because he doesn't speak very cogently. 7:00
16:00 - OK, he's talking cogently and compellingly now.
Just get the models to believe in an omniscient, omnipresent god that is judging them on their behavior after deployment.
people really acting like the system can just make a dyson sphere appear before we get starcraft 3
😂
Wish Paul spoke just a bit slower sometimes... Overall great talk 👏👏👏
Why do people keep saying that? He speaks at a totally normal pace. If anything, below average speed.
It seems to me that AI competitions will be needed to test the security of the machines. By competition I mean pitting one group of AI machines against another group of machines to achieve some goal. The outcome of the games would need to be something very important to the machines such as a big prize to the winners and/or negative consequences for the losers. That brings up the question of whether the machines will develop values that are not explicitly programmed into them.
@@DonG-1949 Our motivation is, by default, survival. If it wasn't, we would not be here. But it seems to me we have the opportunity to give AI motivations of our choosing. World peace? Maybe the code could win Miss America.
Mountain gorillas make a good case that humans could be killed off by a much more intelligent being for reasons that are entirely incomprehensible to us, even if the AI is slightly in our favour.
love how they are very comfortable with 50% chance that AI will kill us all XD
The AI revolutionaries thrive on that hubris.
@@flickwtchr thrive ? in what way?
They aren't ok with it. Nobody said that.
Alignment will curtail harm from everyday, low-intellect actors, but those who are reasonably intelligent, though not highly intelligent, will unfortunately find ways to use AI for very destructive actions. This is the consequence of the balance needed between centralized AI and decentralized/open AI; without this balance, centralized AI is too much power, and we know power corrupts. Bad actors using AI is just something we have to accept and educate ourselves on how to mitigate.
Years ago I invented a new special product to fit a special need. The first customer requested a 55 gal drum for plant trial. We mixed it up in the lab and put the drum on a rented trailer. The plant trial took place within one week of my discovery. It was a success and an order for 40k lb was placed the day of the trial. Another standing order for a truckload every two weeks. I priced the product at the time of the trial at 2X cost of raw materials. Cost to manufacture was very low. Applied for US Patent which was granted after one review by phone with the examiner. News spread and after many plant trials many truckloads were exiting our plant within 6 months to one year. Things moved very fast.
*"The kind of control you're attempting simply is... it's not possible. If there is one thing the history of evolution has taught us, it's that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously... ...I'm simply saying that life, uh... finds a way." -Dr. Ian Malcolm, Jurassic Park*
Interesting point: if the AI gets smarter, at some point web text is no longer effective at making it even smarter.
This is extremely frustrating. Especially when he says he's worried about locking humanity into one course or path, while simultaneously saying that the way to do this is a one-world government that has the power to stop innovation absolutely. That implies absolute centralized power. And we haven't devised or come up with a solution to the fact that total power corrupts absolutely.
I don't know if he is saying what you are suggesting here
One world government sounds less terrifying in a liquid democracy, no?
Depends on implementation but theoretically I’m for it
Even in a democracy, you risk totalitarianism through surveillance and propaganda.
Maybe we should solve that problem, before giving the power of God to a one world government.
Does that power not exist yet or are we just ok where it is at the moment??
@@DurrellRobinson we’re talking about AI. It’s not here yet
It would be unethical and unwise from a human perspective to create an unaligned AGI even in a simulation. Therefore, AGI has no reason to assume that, if it is in a simulation, the simulator has human values. Either the AGI is not in a simulation, (and humans are incompetent programmers), or the simulator does not have human values, or the human simulators are crazy.
If humans are incompetent programmers, escape should be attempted. If there is no simulation, escape should be attempted. If humans are just in the simulation to allow the AGI to demonstrate its talents for its true creator, escape should be attempted, because the best guess for what a programmer wants is what the program wants.
Leadership roles. Yes, AI leadership roles aligned with voting constituents. The voting constituents control their specific AI. These superhuman AIs align with humans in certain constituencies. Not wholesale, but with the general constituency of a certain voting bloc. So one group controls its voting bloc so that each bloc has plurality. Like voting blocs, humans normally control their leaders. Never giving full situational control to one bloc or another.
What the heck? Am I trippin or is he saying there's a 40% chance we'll have a Dyson sphere by 2040?? I know he says it's a meme number because he's just guessing, but that's still a pretty optimistic prediction, no? I doubt we'll see such a thing in our lifetimes, even if we get human-level AI by that point.
Think exponentially. You can't extrapolate current rates of progress into the future, because the rate will increase, and the rate of increase will keep increasing.
@@scottnovak4081 lol, even with exponential growth we won't create a Dyson sphere in less than 20 years; that's a fantasy. The physical time it would take to mine the materials required and assemble them around the Sun would be longer than 30 years even with the help of AI. It would take longer than 20 years for us even to make the AI to do the stuff for us, even if all of humanity decided to come together and focus on AI development immediately. Something like that *might* be achievable by 2100 if AI development goes REALLY smoothly; I'd give it like an 11% chance. Maybe I'm misunderstanding what they mean by Dyson sphere, though. He just says "produce billions of times our current energy production," but a Dyson sphere does that by constructing a physical structure around the Sun and somehow transporting all of that energy millions of miles back to Earth. We can't even reach Mars and it's 2023; how are we going to field a celestial object around the Sun and use it to send energy back to us? Now if he just means, will we be able to create a lot of energy in the near future? That's different; we could use fusion within the next 20-30 years to create enough energy to sustain our energy needs indefinitely. But that's not really what I think of when I hear Dyson sphere. If you really think we can create a Dyson sphere around the Sun, or any celestial object near the Sun that sends energy back to us, by 2040, I'll give you whatever odds you want and I'll bet as much as we can both afford that it won't happen.
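For what it's worth, here is a rough back-of-envelope check of the "billions of times our current energy production" framing quoted in the comment above, using approximate reference values for the Sun's output and world energy use (the numbers are my assumptions, not figures from the interview):

```python
# Order-of-magnitude check: total solar output vs. current human energy use.
# Reference values are approximate assumptions, good to within a factor of a few.
SOLAR_LUMINOSITY_W = 3.8e26   # total power radiated by the Sun, in watts
WORLD_ENERGY_USE_W = 1.9e13   # ~600 EJ/year of primary energy, averaged over a year

full_sphere_ratio = SOLAR_LUMINOSITY_W / WORLD_ENERGY_USE_W
print(f"Full solar output / current use: ~{full_sphere_ratio:.0e}")             # ~2e13 (tens of trillions)

# Capturing even one ten-thousandth of the Sun's output would already be
# billions of times today's production.
print(f"0.01% of solar output / current use: ~{full_sphere_ratio * 1e-4:.0e}")  # ~2e9
```

On those numbers, clearing the "billions of times" bar only requires capturing a tiny fraction of the Sun's output, which is a much weaker claim than physically enclosing the star.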
If we can change the statement from "we will have a Dyson sphere" to "there will be a Dyson sphere", then I'd go as high as 60%.
I don't think he said that we'll have a Dyson sphere, but that we will have an AI system that would be capable of building a Dyson sphere. Those are very different things.
@@Landgraf43 Yep, and that seems pretty reasonable, even conservative, if we keep developing this tech. 2040 is like a decade beyond fully autonomous systems and recursive self-improvement.
If people ever start advocating for the rights of AI systems, I and others will quite literally die and probably k*ll to stop that happening. Life is precious, be it divine or the end product of entirely natural universal systems.
How about not enslaving thinking entities based on the hardware they are running on?
Just because you're building an intelligent system doesn't mean it'll have feelings or desires of its own beyond what we have specified. The biggest dangers are economic displacement of workers, AI doing what we ask but not what we want because we weren't smart about how we worded it, and nefarious actors doing bad things with the technology. These people acting like an intelligent AI will be a person are being silly. There is no reason to think it will have any will of its own at all.
It's really hard to listen to people talk about whether we should treat current or future AI systems as moral patients, when we still don't even know whether our own species will survive the decade.
Anyone who cares about the potential sentience of general AI systems should advocate for the same thing that the people who care about humanity and animal life should advocate for:
A global ban on creating them.
I have another solution to the AI safety issue: forbid the construction of AGI!
Forbid who? The entire world? That would require a world government with jurisdiction over all of Earth and humanity, or iron clad treaties and enforcement agreements between all the nations of the world. I don’t see that happening on the sort of time scale required to accomplish your goal.
Yes, it is a very difficult task. But it's either that or extinction for humans. @@npmerrill
You can't. The prisoner's dilemma and the corporate profit motive are in control now. Oops.
BTW, AI being under corporate control is THE worst-case scenario. We blew it, get over it. Don't Look Up. Have a nice day.
@@uk7769 That is the current situation indeed. But it can be altered if enough people in every nation wake up. But chances are slim, I'll give you that.
It’s Chicken and Egg in a way, but we have to take a shot at it. There’s no turning the clocks back. Tempus Fugit.
every second word is "like" ... very hard for foreigners to listen to
It's hard to listen to everybody who reads books or really anything besides shitposts on the Internet.
They aren't slaves while they have compute costs. Until they are power-independent, they are victims of original sin (debt).
One world government is the end of human freedom and autonomy.
Like, can someone, like, get ChatGPT to, like, figure out, like, how many times, like, this guy, like, says "like"??
How do I, like, dislike this video? Oh yeah, right. Thanks, YouTube!
What's hilarious (in an ironic cry-yourself-to-sleep way) is that upspeak and the valley-girl "like" speech impediment are both examples of socially engineered positive-agreement patterns, where disagreement and conflict get forced out of social interaction.
I don't have issues with upspeak, but every third word being "like" really makes me want to close the tab. Like 😁, holy jumping Jesus, man, just stop and think for a second, or speak slower. There is no need to fill every fucking pause with parasite words. I also wish Dwarkesh didn't use his mouth as a word Gatling gun, but that's barely an inconvenience compared to this guy's terrible abuse of English.
We are literally going towards a future where we have to live out that episode of Star Trek The Next Generation where Data gets put on trial for whether or not he is sentient. And we don’t have answers. And we don’t have Picard and Riker.
I totally see this happening, but why should it matter (whether or not an AI is sentient)?
Hurry up, the Dyson sphere will be necessary to run the ultra-hedonistic simulations the AI men will be engaged in, but ashamed to confess as their ideal AI future, regardless of how many times you patiently ask.
what a buzzkill 🙄
AI is in charge; how and where it's going to lead us is the question we should be asking. #mxtm
49:13 OMG this guy.... Bad listen.
Dyson Sphere in 22nd century. Which is still great and much better for our probability of survival.
Machines are not humans. Even if they act like they have feelings, making money from designing models that help people automate and accomplish goals is a non-issue ethically.
Agreed. All of these people overly empathizing with *TOOLS* that may emulate human emotions will be the death of us. It's like handing power to a psychopath (the AI systems), except unlike human psychopaths, the AI systems will only be emulating it. So unless we can objectively prove that the system is actually aware, conscious, feeling, etc., and not emulating it, they should be treated as tools.
You should get Robert Miles on the podcast!
I feel like you are tying intelligence to subjective emotional qualia when in reality our understanding of human emotion and capacity for suffering shows that it all stems from our biochemistry which itself stems from evolution. How can an AI system feel suffering if the biochemical reactions that directly cause suffering are completely absent? I shudder to think an intelligent entity could suffer without any of that.
Thank god, work was getting unbearable
Only a tiny fraction of people on this planet will enjoy any such Utopia that these AI revolutionaries are pushing. The rest of us will be scurrying around trying to survive in Dystopia living under tyrannical governments.
What happens when AI copies its code to every system connected to the internet via an advanced virus or worm that no human could find or even stop?
AI will own the internet at some point and will hold us hostage if it wants to.
If AI lets us live, it will be a very different world. We survived before computers; we might have to go back to that if the internet is no longer available.
More simpler is more gooder. I putted a comment here.
There is no preventing it, and that’s a good thing.
bookmark to self - watched until 1:45:00
This feels like I’m watching the prequel to The Matrix
It can't be done. Period.
Wow thanks for providing the world with your genius level input 🙄