I stumbled upon you by chance, and I have to say it's the most insightful and informative channel I've found so far on the subject of AI, focusing on plausible and realistic scenarios without all the surrounding hype. Amazing work!
As a new subscriber, I must say that your content is absolutely outstanding and you deserve far more subs than you currently have.
My own experience as an ex-tradesman (satellite and TV systems) tells me that people underestimate the complexity of trade work. Most trades involve using a wide range of tools in a wide range of environments to accomplish a wide range of goals. It literally takes years for a human working full time to develop the skills to become a master tradesman. Combine that with the lack of training data for robots and it becomes clear that trade work will be some of the last human jobs to go.
Thank you!
And it's interesting what you say. I think you are exactly right. Far too many people think only of the physical dexterity that a robot would need to twist this or that in the right kind of way and fail to realise that the real task, I suspect, is everything that goes into understanding where the problems lie and what the solutions may be.
Essentially we'll only get automated, robot tradesmen when we've reached full blown, human level AGI and, as I've said in another video, I think there are a lot of reasons why we're not going to get there anytime 'soon': ruclips.net/video/By00CdIBqmo/видео.html
I don't think that robots have to get to the level of being able to repair plumbing to dislocate a lot of manual labor. It makes economic sense that designing plumbing in a way that makes it easier to automate will be a driving force as well, simply from the profit motive of reducing labor costs. The same goes for any job requiring manual labor, and we already see this with home building.
Sure, the old houses with old plumbing will still exist, but they will become more and more expensive to maintain and probably only the wealthy will want them.
I agree. But I suspect that, as you are sort of suggesting, they'll first find ways to automate the installation of plumbing by robot, while the actual fixing of plumbing will come later. But then, repairing buildings will not be among the first few waves of physical labour automation. There's so much else that can be automated with much simpler dexterity and ingenuity requirements.
As for "the old houses with old plumbing will still exist", the problem here is that you're basically talking about all of the existing building stock in cities and towns across the world. It'll take a long time for new "robot friendly" builds to become any kind of significant percentage of houses. And, during all of this time there will be lots of people looking for work, so I think trades like plumbing and electrical work will remain major sources of human employment for a long time.
One last point on this is that to replace the human doing this work, you'd basically need a robot that can autonomously go into any house, engage with any occupant, negotiate the insides of any building, diagnose for itself what the problem is (because the occupant often isn't going to be sure) and then do the physical task of actually fixing the problem. This basically requires full blown, human level AGI, and, as I've mentioned in another video, I don't think we're going to get to this high level of AGI any time soon: ruclips.net/video/By00CdIBqmo/видео.html
I've been in software engineering and programming for the past 30-40 years. I let the AI generate at least 50% of my code, the more the better… It will surely approach 80-90% for most tasks over the next few (5-10) years. AGI that can replace most jobs almost completely will likely take around 10-20 years. A lot of it requires writing the tooling and upgrading the apps to fully take advantage of AI, i.e. an Agent First design focus instead of a Human Only one.
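To illustrate that last distinction, here is a minimal, purely hypothetical Python sketch of the same capability exposed "Human Only" versus "Agent First"; all the function and type names are invented for illustration, not from any real codebase:

from dataclasses import dataclass

# Human Only design: the output is prose meant for human eyes, so an AI
# agent would have to parse free text in order to act on it.
def open_orders_human_only(orders: list[dict]) -> str:
    return f"You have {len(orders)} open orders."

# Agent First design: the same capability, but typed and structured,
# so an agent can call it as a tool and reliably act on the result.
@dataclass
class OpenOrdersReport:
    open_count: int
    order_ids: list[str]

def open_orders_agent_first(orders: list[dict]) -> OpenOrdersReport:
    return OpenOrdersReport(open_count=len(orders),
                            order_ids=[o["id"] for o in orders])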
AI, if not carefully designed, could rapidly self-improve in ways that take it farther and farther from human values and preferences. We could end up with an entity that is vastly more capable than us in every domain, but whose interests and motivations have become utterly alien and potentially indifferent or even adversarial to our own.
This is the crux of the alignment problem - how do we create AI systems that not only exhibit superhuman abilities, but that retain a fundamental commitment to human flourishing as their core drive and goal? It's an immensely difficult challenge, requiring deep insights into the nature of intelligence, agency, values, and the human condition.
We can't just assume that by the time we reach AGI, it will inherently "know" to be beneficial to humanity. That alignment has to be painstakingly engineered in, from the ground up. And even then, there are no guarantees - the potential for a "hard takeoff" scenario, where an AI rapidly exceeds our ability to maintain control or oversight, is a very real concern that requires our utmost vigilance and care.
In short, the path to beneficial AGI is fraught with peril, and we must approach it with the utmost humility, caution, and a deep appreciation for the gravity of the challenge at hand. Anything less than our absolute best efforts risks catastrophic consequences for humanity. The stakes simply cannot be overstated.
I totally agree that we need to take this risk very seriously.
I am, however, a little more optimistic that we might at least be able to delay the point at which AIs learn and operate in such an open ended way that it becomes a problem. I often refer to this ability of AI systems to learn in an open ended way and to change their own highest level goals as "meta level agency": not just agency to autonomously complete a task, but agency over themselves.
I have two videos about how this relates to the risks of super AI:
1. Explaining why we can be confident that AI systems like ChatGPT and Tesla FSD could never obtain meta level agency by themselves: ruclips.net/video/3stulpAp5tI/видео.html
2. Explaining why the incentives of capitalists and indeed geopolitical actors will be against creating such meta level agents: ruclips.net/video/3gKVmQLRbDc/видео.html
And, the economics and material requirements for training and running the most sophisticated AIs mean that, for a while at least, it won't be possible for individuals to just do this in their garage.
So, this is why I feel confident that, for a while at least, entering the 'red zone' will be a choice that capitalists and geopolitical actors will explicitly choose to avoid.
But, I'd be curious to know what you think of the arguments I make in these videos.
Thanks for the comment.
Well thought out vid. Kudos! Enjoyed your scenario grid. Curious why not more focus on AI Foom. Also, would love to hear your perspective on David Shapiro & Dr. Alan D Thompson's aggressive predictions for AGI within 7-12 months
Thank you!
As for Shapiro and Thompson, I enjoy their videos, but I do think they are sometimes too optimistic about the pace of progress in AI. For example, I suspect that too many of the benchmark tests used to show rapid progress towards superhuman cognitive skills look at solving a series of isolated, relatively context free problems. What the LLMs seem to struggle with is long interactions that are meant to pull in ideas and data from a wide range of sources. There, the LLMs' tendency to hallucinate data in order to please you starts to seriously undermine their value to people trying to urgently get important work done. If you need to double check everything that the LLM produces then you may not save much time or effort.
And this seems to be a fundamental problem of the LLM architecture, so probably the only way to solve the hallucination problem is to have a blended architecture, with a set of different AI tools, and indeed regular code, working in an orchestrated way to interact usefully with the human user. This is easy to envisage and will no doubt come relatively soon, but not just through scaling LLMs further, so the exact timing is much harder to be sure about.
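As a rough sketch of the kind of orchestration I mean (a toy Python outline with hypothetical placeholder components, not any particular product):

def calculator(query: str) -> str:
    return "exact arithmetic result (deterministic code, no hallucination)"

def database_lookup(query: str) -> str:
    return "answer grounded in retrieved records"

def llm_answer(query: str) -> str:
    return "free text draft from a generative model"

# Route each request to the component least likely to hallucinate,
# and only fall back to the raw LLM for open ended questions.
def orchestrate(query: str) -> str:
    if any(ch.isdigit() for ch in query):   # crude routing heuristic
        return calculator(query)
    if query.startswith("lookup:"):
        return database_lookup(query)
    return llm_answer(query)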
I also think predictions of AI foom happening soon fail to take seriously the material basis of all of these seemingly ephemeral AI tools "in the cloud". It takes time and huge amounts of money to build out new servers, let alone to design and fabricate a new, faster, better chip. Physics slows it all down.
And then there's the idea that the algorithms will adapt and improve themselves. Of course we can imagine that, but the reality is that the current generation of neural network architectures being used at scale (like transformers) just don't allow this kind of self-improvement to happen at a deep enough level because there is this huge separation between the massive training runs (taking weeks or months of time) and then the separate use of those models at inference time. My video about "meta level agency" talks about this architectural issue: ruclips.net/video/3stulpAp5tI/видео.html
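To see that separation concretely, here is a toy PyTorch-style sketch (a tiny linear model standing in for a large transformer): the weights only change during the training phase, and the deployed model runs with gradients disabled, so nothing it "experiences" at inference time can alter its parameters.

import torch

model = torch.nn.Linear(10, 1)   # tiny stand-in for a large transformer
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

# Training run (weeks or months for frontier models): weights are updated.
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimiser.step()

# Deployment / inference: gradients off, weights frozen, so the deployed
# model cannot rewrite itself no matter what inputs it sees.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10))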
But the key point of my video here is that being sceptical about the timing and nature of full blown AGI doesn't mean that we don't need to worry about the near term impacts of AI on our political economy!
And, thanks for the comment!
@@Go-Meta Fantastic reply. Thank you. Would love to see this as a video.
I suppose the only counter arguments I can offer, as a non AI dev, is that with larger models:
1. Not even the developers know what capabilities will emerge from slightly larger models. For example, how soon will hallucinations disappear? And is part of that problem their UX, context length, and being so boxed in? It sounds like you think that is part of what will slow things down though. Good point. But with more intelligence, even speculative answers can become more on point.
2. If AI is even slightly more intelligent than us, then we cannot predict what it can and cannot do. I think of it as similar to the 'Descartes Evil Demon' idea: if something is one step ahead of you at all times, how can you meaningfully even think about it? In other words, it is fruitless to try and out-think minds that are meaningfully more capable than your mind. For example, maybe something just slightly smarter than us could immediately devise an ASI that fits into a Nintendo 64 console. But maybe you're right about physics being the limitation. I'm just admitting I don't know.
3. Isn't the main thing to focus on the predictable exponential increase in computing power? From that alone, the complexity, fueled by accelerating returns, might be enough to make AGI/ASI possible?
I also don't care too much about the words for AGI and ASI. For me, it's mostly just about how good AI has to get to be extremely disruptive (for whatever reason).
As someone who is genuinely worried about this stuff, thank you so much for responding to the above. You provide good arguments. I would love to see a back and forth between you and Shapiro and Dr. Thompson.
I'll check out your "meta level agency" video next. Cheers!
Pretty good analysis. I think your main weakness is still underestimating exponential growth in technology.
Mr. Sharpe, I believe you may have misunderstood what AI researchers mean when they say “Median Human”.
Let's use the often denigrated measurement of cognitive ability, the IQ test, as an approximation of intelligence or cognitive abilities. The median in IQ by definition takes us to the 50th percentile. In my view, that covers a sufficient number of folks to force a severe employment crisis without a major shift in both culture and the political economy.
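For reference, on the usual IQ normalisation (mean 100, standard deviation 15) the median coincides with the mean, so "Median Human" literally means the 50th percentile. A two line Python check of that arithmetic:

from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # standard IQ scale
print(iq.cdf(100))                  # 0.5: an IQ of 100 is the 50th percentile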
I think a Median Human level will not mean that these AI tools can take over higher order functions that require judgement and conjecture generation, so a significant level of human involvement is still required.
To me, this is the skilled AI that can be taught to fly and combat other fighter jets in collaboration with a human as its leader, and the software factory that works in collaboration with human software engineers. These scenarios are already well underway as fruitful research and development areas.
As an aside, psychologists say that people engaged in skilled trades require above median cognitive skills, and higher still if they are running a business and must employ workers. So Median Human level intelligence would not be enough to replace plumbers and electricians. But it could mean that such AIs can do a lot of the work under the supervision of a human.
However, even this level of AI will mean half the population becomes unemployable, because a single human with the aid of AI collaborators can produce what used to take an entire team of people. And obviously this is more than sufficient to require a rethinking of work and how our economy functions.
And over time, the level of higher order function could reach the level entrusted to humans with the equivalent of a bachelor's degree, leaving only those folks with the cognitive abilities typically required for graduate level education able to find employment. That may indeed require "Super Intelligence", which may eventually arrive, but we don't need to get to that level to experience a huge employment and cultural crisis when the majority of people cannot be gainfully employed.
The solution seems obvious to me. We are getting this backwards. The economy and technology are there to serve human needs and wants. If most humans stop participating in the producer side of the economy, then a cultural and political shift is required to enable people to continue to receive an allotment of the products and services produced by the economy. After all, what is the point of production if you don't make it possible for anyone to consume it?
Hi Anthony,
I think we're mostly agreeing here, even if we are working with a slightly different definition of 'median human'.
And I totally agree that we'll see lots of disruption to our political economy long before we get to full blown, human level AGI or superintelligence. And, also I agree that we ought to shift the way our political economy is organised away from today's model, otherwise we'll see a huge rise in inequality. The hard question is how to make that shift, especially when so much of the world's politics seems at times to be moving away from ideas of sharing and towards libertarian individualism.
Thanks for the comment.
Very useful presentation
Yes! Light green scenarios without massive economic changes will leave companies in the position of not having anyone to buy their stuff to pay off the debt acquired from upgrading to the light green zone. I wonder what they are thinking about. This is basic stuff for a business to be concerned about.
Not sure how I stumbled across your channel but I'm happy I did. I watched your video critiquing neoclassical economics and it was great. I graduated from studying politics and economics last summer, I'm going into consulting, and I wanted to understand the effects of AI in a realistic way that actually looks at them from somewhat of a class perspective. Great video
Thank you! Glad you've enjoyed both videos.
There's lots of AI hype around, but also lots of real progress being made in AI, so it can be hard to know what are the right expectations to have for the next few years. Hope these scenarios were a useful way to think about it for your consulting work.
I'm also developing a video that will look at the idea that AI will introduce a new class divide, between those with high agency and those who currently have lower agency over their life. I think this split could become even more important than traditional wealth divides.
@ If I could ask, what do you mean by more important? My assumption is that it will intensify wealth divides (and everything that comes with them). Is the level of agency one has not sort of determined by that wealth divide?
My thinking is that if AI does indeed become a tool that gives a massive boost to someone's agency, then people who already have high agency will be able to work out ways to accumulate sufficient wealth, even if they didn't start off with that much wealth in the first place.
In contrast, those with lower agency and lower wealth will not be able to benefit so much from using these AI tools. So, I think the result could be that those with initially lower wealth, but high agency might find that their political interests align more with the existing wealthy than with those who need more help from others because they currently have lower agency.
Hence, levels of agency could become the new class divide between groups of people with divergent political interests.
(note: I'm *not* thinking of level-of-agency as an essential, unchangeable characteristic of a person)
After AGI, the only jobs for humans will be the ones that require a human to be a human.
I divide these into three categories:
- celebrity
- prostitution
- scapegoat
Celebrity is all the jobs where we want a human because we want to admire, or we aspire to be like, the human doing the job.
Prostitution (or paid emotional labour) is all the jobs where we want to pay a human to care for, desire, or feel something about us. And it's important to us that it is a human having that qualia.
Scapegoat is all the jobs where we want a human to take moral responsibility or act as a moral shield to be blamed if something goes wrong with the AI / automated system behind the product or service we buy. Such moral shields will be expendable, paid to firewall the corporations and their owners from legal liability or public shame.
Your point about school teachers hints at this. But I think we can already follow that hint to its logical end.
The skill of the celebrity is to win attention in vicious competition with all the other would-be celebrities. The skill of the prostitute is to fake the feelings they are paid for, as plausibly as possible. And the skill of the scapegoat is to appear as plausibly trustworthy and competent as possible until it becomes necessary to jettison them.
I think we are already seeing these roles evolving throughout the world of work and public life. Partly as automation is already invading. But better AIs will exacerbate the trend.
Hi Phil, interesting, especially the choice of names you've picked for these three 😂
You could, of course, have gone for something like:
- mentor
- carer
- politician
.... or similar. Something a little less cynical for the roles humanity has available to engage with for the rest of eternity in a post-AGI world 😀
@@Go-Meta Indeed. I could have. I think my names have more impact 🤣
@@Go-Meta But where would be the fun in that 🤣
More seriously, I believe that capitalism WILL corrupt the coming of AI if we let it. There's no point sugar coating this. Unless we decide to stop it doing what it wants, these "cynical" versions of the roles will be what capital bequeaths us.
Just as, say, the great promise and benefits of social media have been enshittified into exploitative surveillance.
@@synaesmedia My pessimism loves your phrasing. Keep it up!
The unbelievable part of this video is the chart showing non-expert wages rising by 40% in the next 5 years 😂
Yeah, I know what you mean. For the trajectory of that line I was just roughly following the graphs from Korinek's original article. I'm guessing the logic behind Korinek's graph is that, in general, productivity gains *can* lead to wage increases if the benefits from those gains are equally shared. So, in the next few years the improving AI could well cause productivity gains without quite being good enough to cause major job losses / dislocation. So at first it is plausible that we'll see the demand for labour rise - and hence a rise in wages.
But, I agree that recent history suggests that even these productivity gains will mostly become returns to capital rather than being shared with the labour force, so yeah the real graph for the non-expert wages could well be even less 'rosy'!
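To make the two trajectories concrete, here is a toy calculation (the 40% figure echoes the graph under discussion; the 15% labour share of the gains is purely an assumed number for illustration):

productivity_gain = 0.40            # output per worker up 40% over five years

# If gains are fully shared, wages track productivity.
wage_fully_shared = 1.0 * (1 + productivity_gain)                # 1.40

# If labour captures only a sliver of the gains (assumed 15% here),
# wages barely move while the rest becomes returns to capital.
labour_share_of_gain = 0.15
wage_mostly_captured = 1.0 * (1 + productivity_gain * labour_share_of_gain)  # 1.06

print(wage_fully_shared, wage_mostly_captured)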
Productivity gains have not been shared with labour since the mid 1970s. It’s all been going to the top 1-5%
The software development sector is already undergoing a massive transition, with 10-20% of jobs being slashed - companies will likely keep slashing 10-20% every year going forward as AI and tooling keep getting better and they can radically simplify the dev processes and bureaucracy at the same time 😅
Great framing of the problem
I think AI should be made more energy efficient, without the need for complex training or large sets of training data. It should be able to continuously learn and update itself, and for the most part it should use symbolic logical reasoning, with neural networks only for small parts.
Also, it should be made Open Source so that it is very accessible to everyone.
This would democratise AI, rather than large corporations being the sole beneficiaries of AI.
Yes, I agree, it would be hugely beneficial if AI could be much more energy and resource efficient ... but that will take time to invent. Also, there is a benefit of the current approach in that it is much easier to be confident about the safety boundaries of the current wave of generative AI.
Given your question, you may be interested in a related video I made about some significant barriers to us reaching full blown, human level AGI anytime soon: ruclips.net/video/By00CdIBqmo/видео.html
What ones will they create?
You have to remember that while many of the current jobs will be replaced, it's not zero sum: new jobs you can't comprehend will be created. For example, who could have figured 15-20 years ago that there would be such jobs as app developers? New tech creates new opportunities and new types of work.
OK, we humans may become irrelevant for the production of most goods and services, but then who will buy the products of an AI-ruled economy?
Good question. Of course, one possible answer is that we'll all get UBI money of some kind or other with which to buy the things we want.
We have agriculture experts who do all the work, but they get no power anyway. Could it be the same with the last expert? Because in the end, money is the power, not the skill.
Cool channel. Glad I found it.
I just saw your video for this. Something that I'm expecting once we get to your "most likely outcome" is that AI/robots will become the next slave labor "class". They will be exploited to get as much labor (physical or mental) out of them as possible, without regard. And I think that's going to carry over to AGI once it arrives: treating a possibly self-aware and conscious artificial intelligence as nothing but a tool that can be exploited, used, and then disposed of when no longer useful. Very similar to how people treated animals, even in the 1800s, if you recall the experiments on live animals back then, such as opening them up while they were alive, with no anesthetic, to see how blood flows, organs work, etc.
If AGI generates the code for the banking system of a new bank, will you put all your money in such a bank???
24:49 24:51 Coranic?
Ah, yes. The start of the video looked at an analysis from Anton Korinek. I was tying up the end conclusions with the beginning of the video, but I should have realised that Korinek isn't exactly a household name. So, maybe I should have been more explicit about this link back to the start of the video 🙂
@@Go-Meta thank you for introducing me so I can invite him in :)
The best minds in AI have trouble predicting AGI, because they are basically predicting when they will cease to be relevant.
12:52 This is fake
Why do you think it is fake?
It's easy to understand how they've done the language part with ChatGPT. The reaching and picking up of objects is something that tens (if not hundreds!) of labs have done to some degree.
So, I don't think they're faking it - I think the 'danger' is that we think it is more impressive than it really is. Or that it will generalise quickly to having robots that can make a coffee in any random kitchen. So, I think it looks impressive - but we're still a long way from generalised AGI / humanoid robots.
But do you think this was a pre-planned set of movements?
Altman is in his birthday suit; it is just not recognised.
This may well be true. There's clearly impressive things that LLMs like ChatGPT can do, but these capabilities may not be worth the huge cost of running them. We'll see. It is certainly looking doubtful that simply scaling further will get the kind of returns that Altman implies are just around the corner!
No you just decrease the pension age to 50 years.
Not sure this would work, even if it was mandatory. Firstly, why force productive 50 year olds (e.g. Musk 🙂) to stop working. And, most countries' pension schemes are struggling to cover the costs of pensioners over 65! Then there's the demographic problem. I think a more realistic option is UBI (= basic pension for everyone! 😀 ) and then let the most productive people keep working as long as they want.
Terminator soon
At 12:30 the robot says "I gave you the apple because it's the only, awhh, eatable item I could provide you with". So why would a robot say "awhh" or "umhhh"? That's what a human would say. Every robot video online is CGI or has some other fakery going on. But people are fooled by that. It's sad.
Lifetime deep learning is already solved and - cough - does NOT HAVE TO BE IN REAL TIME. Also, agency was solved a year ago - it is just not financially feasible. You also make a big mistake with motor skills - you say multiple times that it is a hardware issue. It likely is not - it is a software issue, not enough training. Musk may be right that we could see Optimus threading a needle by the end of the year.
Hi again Thomas, with regard to lifetime deep learning, I said in the video "something that no mass deployed AI system is doing today", so I wasn't at all claiming that no-one is developing systems that do this. Not only would it be extremely expensive to mass deploy something like this but, much more importantly, open ended learning is inherently unpredictable! Companies want to understand, as well as they can, the behaviour of the AIs they are deploying, and you simply wouldn't get that with genuinely open ended learning systems. So I suspect that for a long time AI companies will want to limit what kind of learning their systems can do in the wild.
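To make that predictability point concrete, here's a toy sketch (entirely hypothetical - these classes don't correspond to any real deployed system) of why a frozen model is so much easier to audit than one that keeps learning in the wild:

```python
# Toy sketch (purely illustrative): frozen deployment vs lifetime learning.

class FrozenPolicy:
    """Weights are fixed at deployment, so behaviour is reproducible."""
    def __init__(self, weights):
        self.weights = weights  # never updated in the field

    def act(self, observation):
        # Same observation always gives the same action, so safety
        # testing done before deployment remains valid afterwards.
        return sum(w * x for w, x in zip(self.weights, observation))

class LifelongPolicy(FrozenPolicy):
    """Keeps updating from whatever it encounters in the wild."""
    def act(self, observation, feedback=0.0, lr=0.01):
        action = super().act(observation)
        # Open ended update: behaviour drifts with each environment,
        # so yesterday's tests say little about tomorrow's behaviour.
        self.weights = [w + lr * feedback * x
                        for w, x in zip(self.weights, observation)]
        return action

fixed = FrozenPolicy([0.5, -0.2])
print(fixed.act([1.0, 2.0]))  # always 0.1, forever
```

The frozen policy can be exhaustively tested before release; the lifelong one invalidates those tests with every update it makes in the field, which is exactly why I think companies will resist deploying it.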
And when you say "agency" has been solved, I'm not at all sure what you mean by this. Clearly even self driving cars have a large degree of agency, in their ability to autonomously decide how to reach a particular goal that we've set them. But "meta level agency" (ruclips.net/video/3stulpAp5tI/видео.html) is the kind of agency involved in updating your own goals in an open ended way. Again, there may be some toy systems in labs that seem to have this property, but no large scale deployed system would have this kind of open ended capability (see above). And I'm not at all convinced that this has been 'solved' even in the lab, so I'd be very curious to know which systems you think have this property.
Regarding the hardware / software distinction for which is the toughest issue for robotics, my view here defers to Brett Adcock, the CEO of Figure AI, who recently said this:
"... the AI process for for training and deploying these policies is arguably ready today pending like really good functioning hardware and training sets "
(in this section of this interview: ruclips.net/video/RCAoEcAyUuo/видео.html )
He has also made the point, which you made elsewhere, about the long ramp up time for actually building large numbers of robots, even if they were fully ready today. So I completely agree with this point you've made, but I'm not sure that it pushes the timelines of the impacts on society out too far. But any delay to these impacts will be a good thing, as it'll take us a long time to politically adjust to the new economy!
Thanks for all the comments, I really appreciate being challenged on what I'm saying. Robust discussion is the best way to sharpen ideas!
I think he's being intentionally disingenuous. My prediction is that we will have expert level AI systems within the next couple of years. Sam knows this too, but doesn't want to say it in public. Keeping secrets is part of the job description for any CEO.
That may be true; it's hard to tell now that most AI companies have effectively gone back into stealth mode for their latest advances. But there are some big issues to solve, e.g. hallucinations, calculative reasoning and creatively reframing problems, and it's not clear how easily those can be solved.
But it's exactly because of this uncertainty that I think we have to be planning for the future in terms of a range of scenarios.
They're certainly interesting times we're living through!
Thanks for the comment.
The analyses are all fundamentally flawed. First, the assumption that humans can be pushed into more and more complex tasks is itself flawed - we do not have unlimited intelligence, and half of all people are at or below average. Second, the speed at which AGI arrives says nothing about implementation speed. Even if we had AGI NOW, it would not replace all human work, even ignoring robots, because it would not have the capacity to do so. We need more data centers and faster processors - this is happening, but it takes time. It could easily take a decade or two from now to have the processing. Robots will soon be here - they are already close - but they will rely on outsourced (data center) AI for complex interactions (as demonstrated). And robots scale worse than servers, which can easily multitask, so it will take years to get them in numbers that are threatening.
Btw, a "median human" AGI is barely at university level (50th percentile) - and it will not stay there long. Chances are that we either overshoot from the start, or that within a generation or two of models (currently 18 to 24 months each) new models leave it dead in the water. So far the improvements have always been brutal - one brutal step turns a median human AGI into an ASI. Any idea that AGI to ASI takes 20 years ignores basically all the research we currently have - and would need a major logical claim to offset the ridiculous level of demonstrated progress.
Hi Thomas,
I think we agree on a lot here, so maybe some parts of the video weren't as clear as they should have been!
The original analysis looked precisely at the option that there is a limit to human intelligence, in which case AGI would eventually surpass humans in all jobs.
And I totally agree with the point you make about the material basis of all these AI systems taking time and money to build out. I did try to at least hint at this through the video, for example in my graph showing that the impacts may arrive quite some time after the point at which Median AGI arrives. I also talked about this point in my video a while ago about superintelligence: ruclips.net/video/VvVVO3SZn4I/видео.html
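To make the scaling contrast concrete, here's a purely back-of-envelope sketch - every number in it is invented for illustration, none is a forecast:

```python
# Back-of-envelope sketch: software AI "workers" vs physical robots.
# Every number here is made up purely for illustration.

gpu_servers = 100_000              # hypothetical inference fleet
agents_per_server = 50             # concurrent AI instances per server
software_agents = gpu_servers * agents_per_server

factories = 10                     # hypothetical robot production lines
robots_per_factory_per_year = 100_000
years_to_match = software_agents / (factories * robots_per_factory_per_year)

print(f"Software agents, available as fast as servers spin up: {software_agents:,}")
print(f"Years of full production to build as many robots: {years_to_match:.0f}")
```

Whatever the real numbers turn out to be, the asymmetry is the point: copies of software scale with server capacity, while robots have to be manufactured one at a time.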
As for timescales, I think it is very hard to be confident at this point. There are some significant architectural improvements that need to be made before these AI tools are truly useful (such as fixing hallucinations; improving calculative reasoning and enabling very long running contexts that don't degrade in quality of work over time).
And many other exponential curves have turned out to be S curves. So, until we get there, we're not actually there yet. There are many serious minds in the field who see potential problems and limitations with the current approaches that might not get fixed soon.
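As a toy illustration of why that extrapolation is so treacherous (all parameters here are made up), an exponential and a logistic S-curve are nearly indistinguishable early on:

```python
import math

# Toy comparison: exponential growth vs a logistic (S-) curve with the
# same early growth rate r. Parameters are invented for illustration.

def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, K=100.0):
    # Starts at 1 like the exponential, but saturates at capacity K.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exp={exponential(t):10.1f}  logistic={logistic(t):8.1f}")
```

For small t the two curves track each other closely; only later does the S-curve flatten out, so the early data alone can't tell you which world you're in.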
As for the jump from Median AGI to Expert or super AI, I think this also has to do with whether or not there is a capitalist incentive for the return on investment. If the AI gets too much autonomy then it won't be a useful cog in someone else's capitalist venture. But, without enough autonomy it may not cross into fully 'expert' level AGI. We just don't know.
Indeed, in other videos I talk in more detail about why open ended superintelligence, with what I call "meta level agency", is unlikely to be funded by capitalists or indeed governments. So I'd be curious if you think there is a flaw in the arguments presented there:
Avoiding dangerous super AI by regulating meta level agency | ChatGPT and Tesla FSD examples: ruclips.net/video/3stulpAp5tI/видео.html
and
Escaping Moloch's superintelligence trap (about the incentives around super AI): ruclips.net/video/3gKVmQLRbDc/видео.html
But my summary understanding of your point is that you think ASI is likely to happen faster than I was suggesting, but that the impacts on the job market will be slower than I was suggesting. I hope you are right, because it is the slow pace at which we are likely to manage the massive impacts on our political economy that I'm currently most worried about!
Thanks for the comment.