I've had paid subscriptions to both ChatGPT and Claude (through Perplexity) for over a year. In fact, it was explicitly because of Shapiro's recommendation that I subscribed to Claude. For many months during that time, in my job as a senior developer and software consultant, I gave those two LLMs the same prompt, side-by-side, and compared the results. Claude/Sonnet has rarely given me usable (C#/JavaScript) code: For code-writing purposes, it's been literally useless to me (though it performs much better on non-coding tasks). ChatGPT, by contrast, is absolutely brilliant: All I have to do is describe what I want from it (as if I was writing business-level requirements for a junior dev to implement), and it produces code so beautiful (in both structure and style) that, more often than not, I don't need to change anything (except to remove excessive commenting). From the sound of it, Shapiro was always more of a script-writer than an engineer, so it's very unlikely he'll ever be in a position to use any LLM to its full potential in IT. FYI, Claude/Sonnet has been so useless to me (even outside of coding) that I finally switched my Perplexity model to their default. I truly haven't noticed the difference.
My experience matches what I think your point is: o1 and o1-mini outstrip all other models for code. It's not perfect by any means, but those of us who use AI to code saw the 10x improvement in speed and careful thought, and as soon as its context can fit large projects, or it has some sort of RAG index, omni is going to be amazing.
Hey David, thanks for featuring our research on compute scaling. We (Epoch) have some studies on research impact and AI innovations in the pipeline, stay tuned!
I assume call center jobs will be nearly obsolete in the next 5 years. Given how lifelike the new ChatGPT advanced voice model is, it's not hard to see.
You underestimate what superintelligence means, by a lot. Superintelligence invents all the materials needed and probably solves physics theoretically, without the LHC.
@@ikoukas 4.76% chance by 2030. I defined ASI as 2000+ IQ, no context limitations, and full-spectrum modality. Also waved my hands and declared a 10 exaflop holographic computer with fractal memory, etc.
Here's your bite-sized briefing: 1. Stay informed about the latest advancements in AI to navigate the future effectively. 2. Embrace hard work and resourcefulness, as these qualities remain crucial in an AI-driven world. 3. Commit to lifelong learning and adaptability to thrive in a rapidly evolving job market. 4. Explore resources like Epoch AI Research and Sam Altman's blog to deepen your understanding of AI. 5. Approach the intelligence explosion with optimism, viewing it as a chance to unlock human potential and build a brighter future.
Once we have robots, manufacturing and integration capacity in the physical world might scale faster. The robots can build the robots: one makes 2, 2 make 4, 4 make 8, 8 make 16… (after 30 steps you are at a billion). Of course it's more than assembly, but other machines and production lines can be built by AGI/ASI-infused robots, and ores can be mined and smelted by robots. So essentially it may come down to 1, 2, 4, 8, 16… And should AGI/ASI operating in quick-to-apply digital twins of chemistry figure out self-replicating nanotechnology, or fusion in digital twins of high-energy physics (I made another comment with more details on that), that would be yet another story.
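A quick sanity check on that doubling arithmetic (a minimal Python sketch; one copy per robot per cycle is of course an idealization, ignoring cycle times, resources, and failures):

```python
# Idealized self-replication: every robot builds one copy per cycle,
# so the population doubles each step.
robots = 1
steps = 0
while robots < 1_000_000_000:
    robots *= 2
    steps += 1
print(steps, robots)  # 30 steps, 1,073,741,824 robots
```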
You cannot measure the evolution of a technology on the basis of the cost of inputs into the system. A comparison of output quality should be the indicator, I think.
David, superintelligence won't need much time to build a nuclear fusion reactor, because it will build all of this differently; energy, matter, all the constraints will be compressed to an overwhelming point. It will prioritize obtaining the supply of resources and raw materials, and it's going to be wild to see.
Thanks for this! I haven't finished watching yet, but I wanted to questioningly rebut the idea that (e.g.) goals are an anthropomorphic projection. Doesn't goal acquisition stand as a base for self-improvement? Certainly we can predict some objectives of autonomous lower agents (the mouse is starved, so its objective will probably be food acquisition before mating or sleeping); we can predict fewer objectives of anthropomorphic agents (the person is starving, but if their objective is to lose weight they may not run a maze for a burger); can we predict the objectives of a higher autonomous agent (...)?
He mentions that there is no reason to assume an AI will have an objective of its own. So my own take from that is that it's anthropomorphic to believe that it has self-interest, or even a concept of "self" or "being" as an individual that has to survive or pursue a goal without us giving it one. The idea that such a thing might emerge by itself is purely speculative and not based on the facts we currently have.
Great points! The only thing I see missing: it might be a mistake to assume intelligence is not always the greatest constraint, at least if you're talking about superintelligence. If you asked an ape what its biggest constraint might be, it might give you a very bad answer. Similarly, we cannot guess what superintelligence may or may not be able to do. It will always be many steps ahead of us, because it will see reality more for what it really is and be able to manipulate it better than we can. It may also come to the conclusion that intelligence will continue to be the greatest constraint for X. Love the channel. Cheers!
Why? No way you'll end up smarter than the system making the brain implant. Intelligence is now a losing game. The smarter you are, the more you'll realize it doesn't matter anymore. Better off getting your teeth capped for that brilliant smile :)
Thanks for sharing! One comment: listening to you talk about information, data, and charts while viewing another, similar-but-different chart is counterproductive.
I have seen o1 do graduate-level physics problems correctly. It takes a few minutes to solve problems that a human graduate student would take hours to a day to solve. If a preview version can do this, I wonder what the full version can already do?
Why wasn't voice mode enabled with cameras? It's completely nerfed without internet access. Is it too heavy on compute with cameras? Does it need serious hardware, or cloud computing development?
Yep, the compute isn't there yet in terms of being able to handle the servers either, hence why, even as a paying member, you get 30 minutes of use time per day. Software is advancing faster than hardware at the moment.
"Even the concept of goals is human". Wrong. Animals have goals. Cells have goals. Plants have goals. I really don't know HOW you can be so anthropocentric to think that we are the only species or mind architecture able to direct behavior to an aim
Individual cells and plants have behaviors, but I wouldn't call them goals. I'd suppose that a goal requires intention, and intention isn't a characteristic we would associate with a plant or cell. It might be more accurate to say that minds that are capable of intention can have goals. Many complex animals appear to be capable of intention, and many simple animals (and other complex systems) either are not, or do not have it in a way we can recognize at this time. Ultimately, I think our current understanding of consciousness (and the technical vocabulary associated with the field) is insufficiently advanced to justify a strong stance on more than very broad ideas about it. If we don't manage to kill ourselves off, the future empirical science of consciousness is bound to be endlessly fascinating.
Interesting video. The law of unintended consequences tells me that we have no idea what is about to happen. I am certain, however, that there is no utopia on the horizon. People making these declarations are smart computer nerds poorly equipped to divine the future.
@@dankmemes7658 you're stuck in 1800. I think you need an ecology course plus a read of Dennett. Humans are animals and are "natural"; everything is "natural," meaning everything is part of a complex system made of chemistry, physics, and things that happen. Humans are information processors based on DNA and the cell. The DNA (and the RNA keeping it operational) is literally your code, and it sets your main drive to pass on your genes; then the environment selects what is more functional for that aim. Any entity based on DNA has ultimately the same goal, and any strategy we have found, from the group hunting of whales to forex trading, is ultimately directed at keeping us alive as long as possible to reproduce. Of course you need to see this in a systemic way, with an eagle eye, and not consider the single individuals that might reproduce or not. That said, what is a drive? We can define it as behavior directed toward an aim, normally including a solution to a problem. Humans and non-humans both invent solutions to solve their problems and get what they seek. Humans, due to a lot of factors occurring together, have just brought it to a very sophisticated level of abstraction. But why do you think we created complex societies? Ultimately to fulfill our needs to eat, stay safe, stay warm, which all lead to successful reproduction. Curiosity, fear of loneliness, philosophy, technology, love: all devices to the same aim, the same drive as any other entity: keep on existing.
@@dave7038 this is very arbitrary. Assigning "intent" to "complex animals" (what is intent if not a narrative over a process happening in our brain, which is ultimately a drive? What is "complex"?) but excluding "other complex systems" (why?). It seems the classic attempt of humans to compare what they see to themselves; if it's similar enough, they start to ADMIT it can have higher-order processes, otherwise it's just dismissed as "not having X". We shouldn't "associate characteristics" based on our fears and hopes and foresight or lack thereof. We should be much more scientific and open-minded. And there's a massive amount of scientific evidence that animals, even so-called "simple" insects, make choices, reason, and have preferences and personality traits. Also, in the video he talked about having goals, not "intentions", volition, or consciousness, and said only humans have goals. For a computer scientist or an ecologist or a cognitive scientist, that is just laughable.
Hey David. Have you been thinking about the problem that intelligence itself could become a barrier for many humans? I mean that you need to be more intelligent to grasp, or "survive" in, the workforce because it's getting more and more complex. I don't know if this is phrased correctly.
Oh, that's a very interesting (and neglected) aspect of the near-future workforce. Jordan Peterson (before he went hard on the paint) talked about how a crazy percentage of the population has an IQ score that's too low for the US Army to accept. It's a real thing. Which raises the question: are we going to have to dumb future jobs down intentionally so that more people can participate? Or do we just say screw it and let AI do it all? But yeah, it's a valid point you bring up. I've witnessed first-hand how the company I work for has gradually restricted who they hire over the last 5 years… they no longer accept someone who doesn't have a college degree or isn't attending college. No more older hires either; they're going after "kids" in their early 20s.
Yep, we're already at the point where people below a certain cognitive threshold are effectively useless in terms of their value as a "worker." I think the latest language models raise this threshold from somewhere around 80 IQ to somewhere around 100 IQ. I imagine next year we'll see news of substantive business success stories around wholesale replacement of business functions with agentic systems. By that time the bar will be even higher...
I'm excited! And after I've lost my job and house (which fell by 50% in value) and gotten back on my feet, I will enjoy the utopia. Even with the thought of that, I'm excited!
The universe is a sequence of convergences. Particles don't exist until the wave function collapses. And society is converging on an environment where there will be far fewer humans and power will be concentrated in the hands of a few who have everything they need and want. Technology is evolving to accommodate.
The largest constraints for this new technology will be the initial need for new infrastructure and the logistics around deploying it. Once superintelligence is comfortably integrated within our socio-economic systems, the sky is the limit. On our own, we don't come up with major paradigm-shift technological advances (think printing press, electricity, computers) all that often. There's no telling how quickly advances of that magnitude or greater will be invented by a superintelligence. Not to be too hyperbolic, but material constraints are no longer a problem if you have Star Trek-style replicators, energy constraints are no longer a problem with nuclear fusion, labor constraints are no longer a problem with super-advanced robotics, etc. Or this could all be a bubble and we never get superintelligence. The point, though, is that we have to think outside our usual box if we're talking about something that is genuinely smarter than us.
Totally agree on the materials/time issue. People seem to think we're gonna be living in Star Trek the day after AGI, and that couldn't be further from the truth.
The real unspoken danger of AGI/ASI is not the loss of jobs; the loss of jobs should probably be the least of our concerns. Our new jobs will be in the infantry, putting chips into all military vessels known to mankind.
It might be wishful thinking, but AGI/ASI-enabled militaries are extremely surgical by definition, which would make conflicts more crippling to all parties involved, which may or may not make modern (near-future) warfare simply uneconomical to engage in. Hopefully.
"The concept of goals is a human centric concept" Not if you redefine it. A process that results in the world being in some state can be said to act *as if* it has the goal of reaching that state. The result is effectively the same as having that goal and achieving it. It has an 'effective goal' if you like. A thermostat can be said to have the effective goal of regulating the temperature (the 'intentions' of the system are hardwired by an engineer into the system, a bug in the system could be said to change it's effective goal). An LLM can be said to have the effective goal of outputting a plausible imitation of an example of the training data if that example had started with the current context tokens. Given some specific context and more specific training examples you could say it has a narrower, more specific goal (the 'intentions' of the system are learned from training examples and steered further by the context).
The thing about the LHC is that a lot of the tech had to be developed on the fly, and better tech that became available to them changed so much throughout the process that, by the time it was built, objectives had stretched and mission creep had taken place; you ended up with something that looked the same at the end but was so much more. For contrast, look at what happened with DNA sequencing.
@@NeroDefogger Is that supposed to be a critique? It's straight lines on a logarithmic graph of compute toward AI over time. What do you think Moore's law is?
@@WhatIsRealAnymore The rich need us to be consumers of goods that we have enough money to buy. Cars used to be for the super rich only; now they are ubiquitous. The price of tech has fallen dramatically. The first basic 4-function handheld calculators cost $80 in the 70s, then you could buy a credit-card-sized calculator for $1 in the mid-90s, and now you don't need to buy a calculator at all; it's one of the things your phone does, amongst its myriad other functions. In summary: the rich make their money by selling to mass markets. 'Twas always thus.
@@devonhurd7013 you're making some assumptions, and I'm just using general terminology when I talk... you know, sometimes we just use casual language... I studied computer science in college, but as a minor, not a major. I'm trying to get involved.
So we just moved AGI up two years, from 2027 to 2025? Just based on OpenAI's recent update? I thought for sure it was tracking for 2027. If that's the case, then surely putting a timeline on AI at all is premature until we understand the progress better. I'm new to AI, so please educate me if I'm missing something.
I love that you are experimenting with your format in the open and encourage you to keep doing so. But the rotating / looping images as your video are driving me crazy 😅 I would love to see the video match the audio again. The graphs are cool, but it is super frustrating seeing the same graph again and again. I grokked them the first time 😉 🖖 ❤
I would like to push back on the idea that it's still going to take as much time as you think. There's a good possibility that I have misunderstood your point of view, but if I have not, then I would disagree, and I would point to the actual explosion we've seen in robotics due to artificial intelligence in such a short time, compared to the 10-plus years of Boston Dynamics robots we've watched. I would argue that that rapid progress is an indication of the kind of progress that could be achieved in multiple different sectors of science and engineering.
THIS IS NOT MY OPINION, THIS IS FROM OUR CORP LAWYER TEAM. We are a 600B company. The only error possible here is in my retelling and understanding of what I was told. The CXO execs are beholden to the board. The board is legally required to direct the execs to take the actions that return the best profits, NOT to maximise employment. If the board does not do this (as is their charter), then the shareholders can sue them into the ground. Therefore, if AI makes more money, the execs and board are legally compelled to adopt it, and also fearful of legal repercussions if they do not. So the ONLY thing the board/execs can do to decel is to 'wait for more evidence'. That's exactly where we are right now.
I think you're underestimating how much AGI (or really, ASI) can help with navigating many other constraints, e.g. raw materials -> R&D on new materials. R&D cycles at digital time-scales may be unfathomable.
1:30 None of those statements were promises, but rather confident speculation about what our world would be like in an age where AI in general continues to develop; there's a reason this isn't available on OpenAI's website.
So progress may seem to slow down because AI training is taking longer, which means new large models are released less frequently, but the actual progress is still going just as fast as before.
We won't ultimately need to massively increase power generation. That will only be necessary so long as we continue to use software neural networks running on silicon, an extremely wasteful process. In time, we'll likely develop practical neuromorphic processors that will be far more efficient. Einstein was pretty smart and his intelligence was powered by less than 20 watts. Maybe we'll never reach that level of efficiency but there's a heck of a lot of space between that and where we are now.
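For a rough sense of that gap, a back-of-the-envelope comparison (the wattages are ballpark figures and the cluster size is purely hypothetical):

```python
brain_watts = 20          # common rough estimate for a human brain
gpu_watts = 700           # ballpark draw of a modern datacenter GPU
cluster_gpus = 10_000     # hypothetical training cluster

cluster_watts = gpu_watts * cluster_gpus
print(cluster_watts / brain_watts)  # 350000.0: ~350k brains' worth of power
```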
How much biomass will that scenario require? Everything would have to be rebuilt. How much land will be needed to build new factories and scale datacenters if Moore's law is slowing? The computation required will be unbounded, and there are a lot of resource constraints. Maybe AGI is technically right around the corner, but economic access to it is not so good; it pulls all the other industrial facilities skyward with it, environmental risks included.
We can barely understand our own intelligence. Given the current state of measuring AI with benchmarks, I doubt we would even be able to tell if an AI is generally smarter than a human... AIs are already better than humans at numerous tasks. But coming up with an AI that is generally smarter than humans is something we currently have no way to measure, short of being able to interact with it like a human and having it produce consistently brilliant work exceeding most knowledge workers, with a clear ability to either avoid or recognize its mistakes more clearly than we can.
As Elon Musk put it, work will become more like a hobby and optional. I think that's the ideal end state: not entirely gone if wanted, but at the same time no pressure if unwanted. Plus its nature will change; it might seem more like a hobby to us.
That is great news for the rich BUT bad news for everyone else! The moment those who control capital and corporations can produce the goods and services they need without the help of the middle class and the poor, what makes you think they will share the proceeds?! If robots can provide the rich with security, work on their farms, cook their food, clean their houses, make their clothes, etc., everyone else would be doomed! Anyone else who doesn't own any significant assets (land, houses, factories, patents, etc.) would be rendered useless. And the vast majority of us belong to that category.
@@StefanMoises Well, the production side is exactly what might explode once it is fully automated, including the production of food. For one, prices would drop significantly; right now human work makes up a large portion of end-product prices. Regarding buying power: once it becomes obvious, there will be political pressure to solve this, which might come in different forms. UBI (Universal Basic Income) and UBS (Universal Basic Services) might be one solution, although this would make people quite reliant on government as the key distributor. Alternatively, people might own and freely manage portions of productivity (robots, AGI, …), in which case you would still have a capitalist system in which people decide what their productivity is aimed at (without needing to work themselves; they would just supervise their portion of the automated economy, and might even hand oversight of their share to an AGI to manage for them if they don't want to do it themselves). Essentially it's an allocation problem and a decision about whether we prefer central oversight or a decentralized capitalist system. Individual work as we understand it would not be required in either case.
Automation and energy reduced jobs. Imagine the number of workers needed to replace a simple farming tractor, or a PLC replacing an elevator operator. People still have jobs, new types of jobs, but we have a lot more free time. So maybe there is less work to do than in the past.
🎯 Key points for quick navigation: 00:14 *📈 Sam Altman's Blog Analysis* - Analysis of Sam Altman's recent blog post on AI advancements. - Concrete predictions include AI as personal assistants and personalized education. - Emphasis on slow job changes and the need for extensive AI infrastructure. 03:40 *📊 Epoch AI Research Insights* - Analysis of Epoch AI's research on AI training compute trends. - Doubling trends in training compute, training costs, and data for AI. - Observations on the faster scaling of language models compared to vision models. 08:52 *🌐 Economic and Scientific Constraints* - Examination of economic and scientific constraints beyond intelligence. - Focus on material, energy, time, and space limitations in scientific endeavors. - Discussion on the broader implications of AI scaling in various sectors. Made with HARPA AI
Automation has always added more jobs in the past. However, there is a lag between the destruction of the job and the creation of new ones. And as is always the case, this time might be different. The frontier of automation may move faster than the lag.
12:50 Yeah, energy is always the floor-level limitation. And my main concern: what percentage of permanent unemployment will it take to create that social support system? 10%? 20%? I just hope it's sooner rather than later, to prevent so much unnecessary suffering during the transition.
Is there a way to send a request for a type of video? I ask because I was wondering if you could make a video about ways to live longer that are arising or already in development, like the mitochondria restoration work in Japan. Do you know of any others, or have research on them, that you could group into one video?
Intelligence is going up rapidly. The next big limiting constraints are labour and raw materials. The trajectory of the price of labour towards 0 (enabled by robotics) will help alleviate those constraints and unlock an acceleration away from entropy, at least whilst matter and energy still exist.
Broadly agree with the overall view on constraints… and for science, "intelligence" is far from the main constraint. We already have vast intelligence available in the living human population, and those of us involved in research can only rarely get the funding and other support needed to follow, develop, and apply our ideas, no matter how motivated we are. I doubt an AI that is not self-motivated and switched on can compete with a motivated, switched-on human.
These are the videos that you excel at. I really like the ones where you basically do the Dr. Nick Bostrom work. It's true that your predictions might not be 100% accurate, but at least you show an informed scenario.
11:09 "Electricity was scary when it came out." “I couldn’t have electricity in the house. I couldn’t sleep a wink. All those vapours seeping about.” -- Countess Violet Grantham ;->
Not sure that automation created net more jobs. The number of jobs might have gone up (even that has to be checked/verified), but I would like to see how that number compares to population growth. If job growth was +3% vs. +5% population growth, then we still lost jobs, since without automation all jobs would need to be done by people. More important, though: do we want people to do all the jobs? The majority of people work to sustain themselves. There's quite a disparity between what people do for money and what they do from other motivations (like free work in the community etc.), and at least for Germany over 60% of societally important work is unpaid, voluntary work (based on hours worked).
Every gain in life carries a cost. For every achievement, something must be sacrificed, and even in success, there is always a trade-off. If AI were to solve all human problems, the trade-off might be the very things that make us human: our need for purpose, independence, and emotional connection. In exchange for a world of efficiency, stability, and optimization, we might lose our capacity for creativity, resilience, and the unique struggles that shape our individual and collective identities. Keep cheering for utopia...
@@daniellivingstone7759 I didn't know being religious is a disqualifying factor. Fire, agriculture, the printing press, germ theory, etc. were all brought about by deeply religious people. I'm not a religious person myself, but he might have a point: agriculture didn't necessarily make us happier (at least according to historians like Yuval Noah Harari), and it definitely brought about the institutions of monarchy, slavery, and organized warfare. Germ theory did make us live longer, healthier lives, but at a cost as well. One way OP could be wrong: if future AGI/ASI is achieved, we'll arguably be able to expand infinitely into the solar system, where like-minded people could build the societies they desire most in terms of privacy, goals, faith (or lack thereof), level of tech allowed, etc. Earth is as small as it is big, you know.
@@theWACKIIRAQI I agree with what you are saying, but gains do not always breed costs. The ultimate gain will be life extension by hundreds of years. This will only breed a cost if it precipitates large population increases. If AGI mitigates these by somehow discouraging reproduction, and making people happy with such choices through advanced manipulation of brain chemistry or neuromodulation, then there is no equivalent loss, unless you believe it is God's will that humans should not alter their biology and should multiply.
Curious whether you will address the "AGI in 2025" claim next year if it's not here. I have been watching this space long enough to have seen the first "AGI next year" claims made, and this past year, 2024, was a big year for these kinds of ideas, as were late '22 and all of '23.
Sam and others like him are speaking about the future from their own shoes, shoes that will get wealthier as this AI train keeps rolling. For the rest of us watching from outside, it will be a mess, maybe one of the biggest messes in humanity's history of adopting new tech. These large companies are like boulders rolling down a cliff: once they pick up speed it's nearly impossible to stop them, so they will replace humans and become bigger and faster while normal everyday people suffer. Just look at the world we live in. Here in the United States we have a large amount of homelessness, but at the same time our politicians have banked billions. We have the money and power to stop that right now, but our government doesn't. Not sure what would make anyone think that will change, unless it's to get worse. I am all for AI, I know it will and can do amazing things, but I seriously doubt we average people will get much, if any, of the benefits it will bring. For any real progress to happen, a lot will need to be changed by people who frankly don't care about anything or anyone unless it's money and power.
I see a slight mistake in a key assumption here. Most constraints are the product of our cognitive limitations. It is typical of very smart people to find far easier ways than most to achieve a given goal. Superintelligence will be able to dramatically reduce, or even sidestep, most constraints by figuring out approaches that are orders of magnitude more efficient. As just one example, the expensive race to build arrays of nuclear reactors will be cut short if the first thing ASI does is unlock compute energy efficiency in the ballpark of biological brains.
On the "Automation Cliff": human curiosity and creativity are boundless and infinite. Just look at where they got us! That in itself is an assurance that the AC is an intellectual aberration based on a foundational underestimation of human ingenuity. End of rant.
Thx. Well analyzed. Numbers are sometimes scary. We, however, are humans with great capacities. In short: as machines get faster, the time to adjust gets smaller. Meaning: we need to create the next best cooperation among all nations, laying differences aside in order to present the best in us, with higher accuracy and precision, to overcome whatever the barrier might be. We did a great job in the past… let's do it now and keep going strong for the time to come.
Actually, on the scaling speed difference between language and vision: totally logical, as the following can be inferred from McLuhan's "Extensions of Man" postulate: speech = 1D, vision = 2D, agency = 3D, sex = 4D (so of course vision will consume more compute than speech, probably slowing down scaling?)
I think the automation cliff is when there aren't humans who can fill the new jobs left as AI automates nearly everything: most people are of average intelligence, and the last few jobs to be automated will likely be for extremely intelligent people with extremely high levels of experience in their fields, jobs that normal people can't be hired to do.
Automation in the past may have created more jobs than it destroyed, but AGI is a fundamentally different advance: if AGI can do anything a human can do, then by definition, any job it creates, it can do itself.
You still need humans to do physical work.
@@ljre3397 sure, until AGI puts itself into a robot that can do anything a human can physically do
@@ljre3397 not up to date on advances, are you?
When the time comes that AI can do any job, then having a job won’t really be all that important or necessary.
@@NeroDefogger Do you understand what a logarithmic scale is? A straight line on a log graph is exponential growth...
The only constraints are 'petty politics' and 'human stupidity'.
Illogical ideologies and corruption are quite a big problem IMO. Moral AI alignment is impossible when the people training the system believe things like eating an unfertilized chicken egg is tantamount to murder but aborting an unborn baby is totally fine. Allowing men and women to declare themselves the opposite sex and suddenly they are that sex. Evolutionary biology and a common single ancestor may have seemed legitimately possible 150 years ago but we didn’t have any idea how incredibly complex cellular replication is and we are still learning more about the processes involved today. Organic chemistry is extremely complicated and time is its enemy. The idea the whole universe is an unguided process and occurred accidentally by chance is preposterous. A language, DNA, doesn’t occur accidentally and information can only come from a mind.
If we landed on Mars and found a bunch of rocks arranged into shapes that said, “Welcome to Mars Humans! Congratulations!” Nobody would ever claim that message to be an accidental chance occurrence.
I think governments are going to be left out of the loop. Charles Stross' Accelerando covers the obsolescence of government well. They just won't be able to keep up with the pace of change.
When you reduce all this down to its simplest form, I think the most critical question is, "Will the super-wealthy individuals and corporations suddenly decide that humans are intrinsically valuable outside of their potential as docile workers?" If you think they will, you should ask, "Why have they not shown any sign of this to date?"
I am unreasonably and fearfully hopeful that I am wrong about this.
What does the average person think? That is what matters most.
They might not have a choice. Back in the days of the American frontier, owners of large fur-trading businesses were immensely wealthy and held great political sway. Nowadays hardly anyone wears fur. After the vast majority of businesses are rendered obsolete by AI, who will be left on top? To take it a step further, what if technological advances render the whole structure of our economy (including money as we know it) irrelevant? If there's no money, who will bribe the politicians? What if AI runs the government and there's nobody to bribe?
You forget that this is a unique time, not like the old classic days. I mean this: take Apple or Google or Microsoft or any business; it needs customers. No customers, and all those firms go bankrupt. With no income besides the rich, all those big firms that lean on quantity of customers would end. So the reason won't be charity; the reason will be economic necessity. Hence an adapted tax system for robots that replace humans, which would also make robots far less attractive as replacements for all jobs. Firms, small, medium, or big, need customers, and the government needs taxes; otherwise, how do you pay the police to keep order, or pay the military, or build roads and infrastructure? See, the economy cuts off pure egoism in the end, or else you simply won't have business or a functional society. Adaptation will arise in a way that keeps the economics making sense; never underestimate that all of society and business are holistically bound to each other by the economy and its necessity.
@@Jack0trades Well that's the question. Our sociopathic overlords talk about UBI, but printing money never ends well. They're not going to fund it out of their own pockets. Fortunately, real limits are coming up fast. Compute, energy, portability, offline, real limits. We're not getting fusion energy. And nobody wants robots with their head in a cloud.
I just want to add a reply that says the original comment was absolutely correct about the biggest question we face. Everyone seems to assume that the rich and powerful will give up all of their money, power, and social status to build a 100% automated abundant utopia for the billions of people on Earth. Why would anyone do that? It goes against the very nature of the people competing for power and social status. Letting go of a couple billion unnecessary people would make a lot more sense to the masters of our world.....
the agricultural revolution was the singularity tbh
😂💯
And tools being the ‘spark’
@@dg-ov4cf I agree and this is brilliant hahaha, but due to your spelling I have to hit you with the definition of that spelling: 1. imitative 2. relating to, characterized by, or exhibiting mimicry
Which, now I actually realize weirdly kind of applies to memes (which I assumed you meant). The reason to develop shared symbols for concepts is probably due to us mimicking each other. It’s replicating behaviors. Copying ideas. Is this the etymological link to Dawkins’ concept and I completely missed the obvious? 😅
was the beginning for sure. needed to unlock the build tree to get more pop
No. The Human Instrumentality Project was...
The explosion is happening right now: AI programming AI, designing AI chips, over half of the internet turning into AI content. The biggest bottleneck really is what is pointed out here: how fast these capabilities roll out, the scientific aspect, the engineering and deployment of engineered systems like robotics, and how rapidly it all reaches the end consumer. At this point I think the biggest issue is how we transition from where we are to what is coming, and I have a bad feeling governments may take knee-jerk reactions to restrict these technologies in favor of human labor/jobs.
They will. Govt will butcher anything it can leverage as propaganda; AI is no exception, sadly.
When a robot can troubleshoot and fix anything on a car, that's when I'll care.
@@LeonardoPisano-sn2lp We are close; most diagnostic software today is standard I/O. I know Tesla, Mercedes, BMW, and others are using adaptive input systems. These systems use visual, auditory, and sensor feedback for diagnosing issues with cars. Similar systems are in use in farm equipment, and if current information is anything to go by, trains and other vehicles like boats are also integrating all of this. This is not something you will see in 50 years; the point you put forward is something that will hit within the next 30 months.
We can just hope multiple orgs get to the same point; they can't censor everyone or every country.
@@hypebeast5686 Bill Clinton said the same about the internet back in the 90s, and even in the West it is highly censored and controlled.
Why pay a human to do something when a robot can do it for half the cost or less? This is the thing that sticks in my head. Every corporation will be incredibly motivated to take every robot and AI worker they can get, because it will cut costs.
Corporations are designed to maximize profits, and that's what AI and robots are basically the best at. I can easily imagine that the corporations who save money by not having as many people will eventually be able to buy out or otherwise economically constrict their competitors until there are no human-majority businesses left.
I can only see this happening incredibly quickly. Imagine what happens when somebody spends the time to build an agent dedicated to business accounting. Why would anybody hire an accountant again? Same for when we make a robot which unerringly sorts mail. Guess what, mail sorter is a dying job right now because we have, lol. It really is a matter of how fast they can build the machines.
AI will be terrible for humanity short term, amazing long term.
I'm definitely replacing people with robots at my restaurant in the next few years. I could get a decent business loan if Trump goes back to the White House. I'll use that to buy the robots and make all the money back quickly to pay it off. I'll still have a few humans there to supervise the robots, but no more taking phone calls. I'll leave all the phone calls to the restaurant AI because I hate phone calls. I don't understand why people still call to order in 2024; just use the damn app and order online. You wouldn't believe how many people call to complain about how "difficult" it is to navigate the menu on the website or the app, even though it's extremely easy. Yeah, human stupidity is definitely a consistent constraint in all industries.
@@RyluRocky it will deem humanity entirely obsolete long-term. It's not like ASI would stay subservient to us when we'll be mere ants in comparison to it.
@@flickwtchr it's definitely smelling like a faith-based self-reference argument 🤷🏽♂️… but like string theory 😂 need bigger colliders
@@flickwtchr you can see it for what it is when you try to engage in more nuanced discussion. But that cannot be allowed. In truth, if their stance were stable, they would want to be sure it is as stable as possible, with minimal destructive influence, by learning about the areas that cause issues rather than ignoring them or answering them by going deeper into the call for progress.
In short, there are so many complex associated dependencies: culture, energy, geopolitics, other tech developments. These are the things that cause both resistance and the corruption of development trends into very, very nasty outcomes.
If the trends of progress cannot truly be halted, then we need to put as much effort into understanding the associated factors not core to the technical progression.
Has the longest-running study on evolution, the Long-Term Evolution Experiment, taught us nothing?!
Small correction: a straight line on a log-linear plot indicates exponential growth. With log scales on both axes, a straight line indicates a power law relationship rather than exponential growth. Having said that, visually the trend seems to be increasing faster than the plotted power-laws. It would be interesting to see these data in log-linear form...
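For anyone who wants to check that distinction numerically, here's a minimal sketch (illustrative curves, not the video's data):

```python
import numpy as np

x = np.linspace(1.0, 100.0, 200)
power_law = 3 * x**2               # y = a * x^b
exponential = 2 * np.exp(0.1 * x)  # y = a * e^(b*x)

# A power law is a straight line in log-log coordinates:
# the slope of log(y) vs log(x) is constant (the exponent b = 2).
loglog_slope = np.diff(np.log(power_law)) / np.diff(np.log(x))

# An exponential is a straight line in log-linear coordinates:
# the slope of log(y) vs x is constant (the rate b = 0.1).
loglinear_slope = np.diff(np.log(exponential)) / np.diff(x)

print(loglog_slope.min(), loglog_slope.max())        # both ~2.0
print(loglinear_slope.min(), loglinear_slope.max())  # both ~0.1
```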
For research, I think there are 3 pillars, AGI/ASI is only one. The second is quick to apply digital twins (advanced narrow AI like AlphaFold, AlphaProteo or GNoME) in which you can quickly do billions of experiments and stress test the best candidate solutions. These might also be built, trained and applied (AGI based design iteration in quick to apply digital twins) by AGI at some point, and we might apply the same approach e.g. for high energy physics AI simulators to test billions of designs for nuclear fusion, or for the human body to do virtual clinical trials. That’s the second. The third is finally robots and automation in the physical world, but I think with pillars 1 and 2 most of the heavy lifting can be done in digital space before entering the physical space.
I'm a bit sceptical about the whole argument: sure, the INPUT (data/money/energy/petaflops) grows exponentially, but I suspect we also perceive the output logarithmically; change is perceived in orders of magnitude.
Take money as an example. If you're poor (0-10k), getting a million is an incredible change. If you're rich (5m-20m), getting a million is nice. If you're super rich (100m plus), you mostly don't care about another million.
Take learning a language as an example. Learning the first few words is exciting. But learning a new word after you know > 2k words is barely noticeable.
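A tiny Weber-Fechner-style toy model of that idea (the log-ratio "perception" function and the wealth brackets are my own illustrative assumptions):

```python
import math

def perceived_gain(wealth: float, bonus: float = 1_000_000) -> float:
    # Toy model: perception tracks orders of magnitude (log of the
    # after/before ratio), not the absolute size of the change.
    return math.log10((wealth + bonus) / wealth)

for wealth in (10_000, 5_000_000, 100_000_000):
    print(f"{wealth:>11,} -> {perceived_gain(wealth):.3f}")
# 10,000 -> 2.004 (life-changing)
# 5,000,000 -> 0.079 (nice)
# 100,000,000 -> 0.004 (barely felt)
```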
At some point, I would say, the effect will feel infinite, because through technology we gain the ability to experience anything.
AI is going to be a very painful technology to adapt to. It will change our world more than the internet itself has.
At least corruption and ideological bias are taken care of and aren't a cancer in our government and non-governmental institutions, so we can definitely trust the people who are aligning and controlling these systems.
It will change the world more than the agricultural revolution, the Industrial Revolution, the tech/computer revolution and the internet all combined imo.
That’s how I see it too: we could create this digital God tomorrow and it would still take us a decade to get through the human “process”.
The consolation here is that (hopefully) AGI-powered gene therapy will add longer, healthier years to our lives, so the wait wouldn't be as excruciating :)
I don't think humans will be able to adapt to it; you'll have to merge with it, transhumanism-style.
@@7TheWhiteWolf This is one of the places where I take issue with the vision of the future presented by Star Trek. Humans suck. I'm much more interested in a future with humans like the Illyrians, Voyager "Unity"-style ex-Borg (individual cyborgs with the capacity to collectivize or not at will), mental transfer to Soong-type robot bodies (as with Picard at the end of season one of ST:P), or Barclay's Cytherian-enhanced intelligence.
Most of my family doesn't believe me when I explain to them that we'll be able to brute-force Longevity Escape Velocity next decade thanks to AI-assisted research simulations.
I really hope and believe I'll be able to tell them I told you so.
As I am accompanying them to their first Rejuvenation Treatment.
No, my friend, only the rich will be able to do so.
@@Michael-Humphrey Nope. It's a lot more profitable if everyone can do it.
After all, once it starts they'll have a captive audience that will vote for their governments to pay for it.
@@Michael-Humphrey The problem is that it's too easy for backyard tinkerers and illegal black markets to utilize the tech that extreme life extension would require. If the tests on mice and pigs have been any indicator, immortality is as simple to mass-produce as an mRNA vaccine, which can be done by black markets. Honestly, it would be more profitable for the government to just mass-produce it themselves, and save on healthcare costs while gaining political favor so they can maintain power.
That's a real possibility, but unlikely given human nature and the current state of our world... just sayin', the oligarchs who own everything will never let it happen.
@@vi6ddarkking The initial cost of rejuvenation is going to be high. I highly doubt they will give that kind of benefit to everyone. Better get rich fast.
Dude, you're nuts. Do you know how many Python scripts I wrote this week? I don't code, and I just learned about power four days ago. I couldn't do anything without AI. This is a daydreamer's paradise. "The sleeper has awakened."
You said Sonnet was better than o1 in most cases. Are you able to produce 1000+ line single-file Python apps with interactive GUIs (100% bug-free) in one shot with Sonnet? Or 1,800-2,000 lines of modularized apps in 2-4 shots?
I know you mentioned usage regarding information (not coding). However, I find that most people who complain about o1 (irrespective of domain) are improperly prompting it, because it requires a novel prompt-engineering paradigm.
My experience, across the board (any domain), is o1 absolutely obliterates anything I’ve ever used before by an order of magnitude at least
I've had paid subscriptions to both ChatGPT and Claude (through Perplexity) for over a year. In fact, it was explicitly because of Shapiro's recommendation that I subscribed to Claude.
For many months during that time, in my job as a senior developer and software consultant, I gave those two LLMs the same prompt, side-by-side, and compared the results.
Claude/Sonnet has rarely given me usable (C#/JavaScript) code: For code-writing purposes, it's been literally useless to me (though it performs much better on non-coding tasks).
ChatGPT, by contrast, is absolutely brilliant: All I have to do is describe what I want from it (as if I was writing business-level requirements for a junior dev to implement), and it produces code so beautiful (in both structure and style) that, more often than not, I don't need to change anything (except to remove excessive commenting).
From the sound of it, Shapiro was always more of a script-writer than an engineer, so it's very unlikely he'll ever be in a position to use any LLM to its full potential in IT.
FYI, Claude/Sonnet has been so useless to me (even outside of coding) that I finally switched my Perplexity model to their default. I truly haven't noticed the difference.
Could you share your best prompts?
My experience matches what I think your point is: o1 and o1-mini outstrip all other models for code. It's not perfect by any means, but those of us who use AI to code saw the 10x improvement in speed and careful thought, and as soon as its context can fit large projects or it gets some sort of RAG index, omni is going to be amazing.
I agree; for fixing logical errors deep in application logic it's unparalleled. Basically, I feel it can reason.
Hey David, thanks for featuring our research on compute scaling. We (Epoch) have some studies on research impact and AI innovations in the pipeline, stay tuned!
I'm sustaining myself with a call center job. Guess I'll starve 🤷♂
Get some additional qualification. There's still time: at least 5-7 years.
I assume call center jobs will be nearly obsolete in the next 5 years. Given how lifelike the new ChatGPT advanced voice model is, it's not hard to see.
Get an education!! Become a doctor or nurse. They are still safe for at least another 40-70 years.
No they aren't. @@jimj2683
no you won't. There is no one starving or being cold in this country. You will merely level out with the bottom 90%.
Small modular nuclear fission reactors. We can make them very safe with present technology
A great example of why the modern religion of "safety-ism" is so harmful.
One thing I like about this channel is the continued optimism the owner has about Artificial Intelligence.
You underestimate what superintelligence means, by a lot. Superintelligence invents all the materials needed and probably solves physics theoretically, without the LHC.
Is it safe to call superintelligence God?
@@ikoukas 4.76% chance by 2030. I defined ASI as 2000+ IQ, no context limitations, and full-spectrum modality. Also waved my hands and declared a 10 exaflop holographic computer with fractal memory, etc.
@@Parzival-i3x With enough compute that's what it will be
@@zvorenergy Won't happen overnight, but with enough computation it's the reasonable outcome.
@@ikoukas It's a deep time project. The WEF's hopes are dashed by 2030. No bugs and drugs for me, thanks.
Here's your bite-sized briefing:
1. Stay informed about the latest advancements in AI to navigate the future effectively.
2. Embrace hard work and resourcefulness, as these qualities remain crucial in an AI-driven world.
3. Commit to lifelong learning and adaptability to thrive in a rapidly evolving job market.
4. Explore resources like Epoch AI Research and Sam Altman's blog to deepen your understanding of AI.
5. Approach the intelligence explosion with optimism, viewing it as a chance to unlock human potential and build a brighter future.
20:03 your "automation cliff" concept also ties in with Jevons Paradox which I learned about from a podcast with Daniel Schmachtenberger!
4:45 Correction:
10^4 is 10 x 10 x 10 x 10 = 10,000
1^12 = 1 [Not 1 billion]
10^12 = 1 trillion
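A throwaway Python check of the same powers, for anyone who wants to verify:

```python
# Sanity-check the correction above.
print(10**4)   # 10000 (ten thousand)
print(1**12)   # 1 -- one raised to any power is still 1, not a billion
print(10**12)  # 1000000000000 (one trillion)
```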
Once we have robots, scaling up manufacturing and integration capacity in the physical world might go faster. The robots can build the robots: one makes 2, 2 make 4, 4 make 8, 8 make 16… (after 30 steps you are at a billion). Of course it's more than assembly, but other machines and production lines can be built by AGI/ASI-infused robots, and ores can be mined and smelted by robots. So essentially it may come down to 1, 2, 4, 8, 16… And should AGI/ASI operating in quick-to-apply digital twins of chemistry figure out self-replicating nanotechnology, or fusion in digital twins of high-energy physics (I made another comment with more details on that), that would be yet another story.
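For what it's worth, the 30-step figure above checks out; here's a tiny illustrative Python loop (the one-robot-builds-one-robot-per-step assumption is the commenter's premise, not an established fact):

```python
# Sketch of the doubling intuition: how many replication steps until
# a single self-replicating robot population passes one billion?
count, steps = 1, 0
while count < 1_000_000_000:
    count *= 2   # each robot builds one more robot per step
    steps += 1
print(steps, count)  # 30 steps -> 1,073,741,824 robots
```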
I expected the video to cover Mira and co. leaving OpenAI.
Still watching it. Was hoping he'd bring it up!
Tomorrow probably, or the next day
You cannot measure the evolution of a technology on the basis of the cost inputs into the system. A comparison of output quality should be the indicator, I think.
Hey, Dave. You've got 4 days until AGI! How crazy would it be if a lab came out to the public on Monday, revealing AGI achieved internally? 😅
David, superintelligence won't need much time to build a nuclear fusion reactor, because it will build all of this differently; energy, matter, all constraints will be compressed to an overwhelming degree. It will prioritize obtaining the supply of resources and raw materials, and it's gonna be wild to see.
Thanks for this! I haven't finished watching yet, but I wanted to questioningly rebut the idea that (e.g.) goals are an anthropomorphic projection. Doesn't goal acquisition stand as a base for self-improvement? Certainly we can predict some objectives of autonomous lower agents (the mouse is starved, so its objective will probably be food acquisition before mating or sleeping); we can predict fewer objectives of anthropomorphic agents (the person is starving, but if their objective is to lose weight they may not run a maze for a burger); can we predict the objectives of a higher autonomous agent (...)?
He mentions that there is no reason to assume an AI will have an objective of its own. So my own takeaway is that it's anthropomorphic to believe that it has self-interest, or even a concept of "self" or "being" as an individual, or a sense that it has to survive or pursue a goal without us giving it one. The idea that such a thing might emerge by itself is purely speculative and not based on the facts we currently have.
Great points! The only thing I see missing is that it might be a mistake to think that intelligence is not always the greatest constraint, that is, if you were talking about superintelligence. If you asked an ape what its biggest constraint might be, it might give you a very bad answer. Similarly, we cannot guess what superintelligence may or may not be able to do. It will always be many steps ahead of us because it will see reality more for what it really is and be able to manipulate it better than we can. It may also come to the conclusion that intelligence will continue to be the greatest constraint for X. Love the channel. Cheers!
I want a brain implant to make myself smarter
Me want that 2 !
It wouldn't make you smarter. It would make you not quite you.
@@tylerislowe You don't know that
Brain implants connected to the internet? Sounds fine to me 😂
Why? No way you'll end up smarter than the system making the brain implant. Intelligence is now a losing game. The smarter you are, the more you'll realize it doesn't matter anymore. Better off getting your teeth capped for that brilliant smile :)
No wasted words and clear, good communication!
Thanks for sharing!
One comment: listening to you talk about information, data, and charts while viewing another chart that is similar but different is counterproductive.
I have seen o1 do graduate-level physics problems correctly. It takes a few minutes to solve problems that a human graduate student would take hours to a day to solve. If a preview version can do this, I wonder what the full version can already do.
Computers have been able to do in a day what takes a human years since the sixties. This is nothing new.
My job is being retired. Hopefully that job is safe from the AGI robots.
Nope, in the upcoming crusade against the machine, your ass is getting drafted 😂
Thanks David. Clarity and really valuable points to prepare ahead.
Why wasn't voice mode enabled with cameras? It's completely nerfed without being connected to the internet. Is it too heavy on compute with cameras? Does it need serious hardware and cloud-computing development?
Yep, the compute isn't there yet in terms of the servers being able to handle it, hence why, even as a paying member, you get 30 minutes of use time per day. Software is advancing faster than hardware at the moment.
A very sober analysis with lots of information and details. I like it, will subscribe! 😁
In 2 years millions of jobs will vanish. Just account for self-driving trucks, taxis and buses.
10,000 days? This was foretold by Tool.
"Even the concept of goals is human". Wrong. Animals have goals. Cells have goals. Plants have goals. I really don't know HOW you can be so anthropocentric to think that we are the only species or mind architecture able to direct behavior to an aim
Animals don't have goals outside of their natural drives.
Individual cells and plants have behaviors, but I wouldn't call them goals. I'd suppose that a goal requires intention, and intention isn't a characteristic we would associate with a plant or cell. It might be more accurate to say that minds that are capable of intention can have goals. Many complex animals appear to be capable of intention, and many simple animals (and other complex systems) either are not, or do not have it in a way we can recognize at this time.
Ultimately, I think our current understanding of consciousness (and the technical vocabulary associated with the field) is insufficiently advanced to justify a strong stance on more than very broad ideas about it. If we don't manage to kill ourselves off the future empirical science of consciousness is bound to be endlessly fascinating.
Interesting video.
The law of unintended consequences tells me that we have no idea what is about to happen.
I am certain however there is no utopia on the horizon.
People making these declarations are smart computer nerds poorly equipped to divine the future.
@@dankmemes7658 you're stuck in 1800. I think you need an ecology course plus a read of Dennett. Humans are animals and are "natural"; everything is "natural," meaning everything is part of a complex system made of chemistry, physics and things that happen. Humans are information processors based on DNA and the cell. The DNA (and the RNA maintaining it operational) is literally your code, and it sets your main drive: to pass on your genes; then the environment selects what is more functional for that aim. Any entity based on DNA ultimately has the same goal, and any strategy we have found, from the group hunting of whales to forex trading, is ultimately directed at keeping us alive as long as possible to reproduce. Of course you need to see this in a systemic way, with an eagle eye, and not consider the single individuals that might reproduce or not.
That said, what is a drive? We can define it as behavior directed toward an aim, normally including a solution to a problem. Humans and non-humans both invent solutions to solve their problems and get what they seek. Humans, because of a lot of factors occurring together, have just brought it to a very sophisticated level of abstraction. But why do you think we created complex societies? Ultimately to fulfill our needs to eat, stay safe, stay warm, which all lead to successful reproduction. Curiosity, fear of loneliness, philosophy, technology, love, are all devices to the same aim, the same drive as any other entity: keep on existing.
@@dave7038 this is very arbitrary. Assigning "intent" to "complex animals" (what is intent if not a narrative over a process happening in our brain which is ultimately a drive? What is complex?) but excluding "other complex systems" (why?).
Seems like the classic human attempt to compare what they see to themselves: if it's similar enough, they start to admit it can have higher-order processes; otherwise it's just dismissed as "not having X". We shouldn't "associate characteristics" based on our fears, hopes, and foresight or lack thereof. We should be much more scientific and open-minded. And there's a massive amount of scientific evidence that animals, even so-called "simple" insects, make choices, reason, and have preferences and personality traits.
Also in the video he talked about having goals, not "intentions", volition or consciousness. And said only humans have goals.
That for a computer scientist or an ecologist or a cognitive scientist is just laughable.
I'm glad software people are discovering basic economics. Production = Land + Labour + Capital.
Hey David. Have you been thinking about the problem that intelligence could pose for many humans? I mean that you need to be more intelligent to grasp, or "survive" in, the workforce, because it's getting more and more complex. I don't know if this is phrased correctly.
Oh, that's a very interesting (and neglected) aspect of the near-future workforce. Jordan Peterson (before he went hard on the paint) talked about how a crazy percentage of the population has an IQ score that's too low for the US Army to accept. It's a real thing. Which raises the question: are we going to have to dumb future jobs down intentionally so that more people can participate? Or do we just say screw it and let AI do it all? But yeah, it's a valid point you bring up. I've witnessed first-hand how the company I work for has gradually restricted who it hires over the last 5 years: it no longer accepts anyone who doesn't have a college degree or isn't attending college.
No more older hires either; they're going after "kids" in their early 20s.
Yep, we're already at the point where people below a certain cognitive threshold are effectively useless in terms of their value as a "worker." I think the latest language models raise this threshold from somewhere around 80 IQ to somewhere around 100 IQ. I imagine next year we'll see news of substantive business success stories around the idea of wholesale replacing business functions with agentic systems. By that time the bar will be even higher...
I'm excited, and after I've lost my job and house (which fell by 50% in value) and gotten back on my feet, I will enjoy the utopia. Even with that thought, I'm excited!
You're a role model for me, David! Keep up your excellent work 👌🏾🙏🏽
The universe is a sequence of convergences. Particles don't exist until the wave function collapses. And society is converging on an environment where there will be far fewer humans and power will be concentrated in the hands of a few who have everything they need and want. Technology is evolving to accommodate.
Well, it should be exponential even on the logarithmic scale.
The largest constraints for this new technology will be the initial need for new infrastructure and the logistics around deploying it. Once superintelligence is comfortably integrated within our socio-economic systems the sky is the limit. On our own we don't come up with major paradigm shift technological advances (think printing press, electricity, computers) all together that often. There's no telling how quickly advances of that magnitude or greater will be invented by a superintelligence. Not to be too hyperbolic, but material constraints are no longer a problem if you have star trek style replicators, energy constraints are no longer a problem with nuclear fusion, labor constraints are no longer a problem with super advanced robotics, etc etc.
Or this could all be a bubble and we never get superintelligence
The point, though, is that we have to think outside our usual box if we're talking about something that is genuinely smarter than us.
Totally agree on the materials/time issue. People seem to think we're gonna be living in Star Trek the day after AGI, and that couldn't be further from the truth.
True, we might get Star Trek before AGI.
The real unspoken danger of AGI/ASI is not the loss of jobs. The loss of jobs should probably be the lesser of our concerns. Our new jobs will be in the infantry, putting chips into all military vessels known to mankind.
It might be wishful thinking, but AGI/ASI-enabled militaries are extremely surgical by definition, which would make conflicts more crippling to all parties involved, which may or may not make modern (near-future) warfare simply uneconomical to engage in.
Hopefully
first!
Good summary of the data, excellent commentary. I agree on Claude vs. GPT, but using the Memory functions helps a bit.
Shouldn't power draw go down or remain flat, so that we get better performance at either the same or lower power draw?
Also vision training data is a limitless resource. A pretty strong benefit.
"The concept of goals is a human centric concept"
Not if you redefine it. A process that results in the world being in some state can be said to act *as if* it has the goal of reaching that state. The result is effectively the same as having that goal and achieving it. It has an 'effective goal' if you like.
A thermostat can be said to have the effective goal of regulating the temperature (the 'intentions' of the system are hardwired by an engineer into the system; a bug in the system could be said to change its effective goal). An LLM can be said to have the effective goal of outputting a plausible imitation of an example of the training data, as if that example had started with the current context tokens. Given some specific context and more specific training examples you could say it has a narrower, more specific goal (the 'intentions' of the system are learned from training examples and steered further by the context).
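A toy Python sketch of that thermostat reading, just to make the 'effective goal' framing concrete (the controller, constants, and room model here are all made up for illustration):

```python
# Minimal sketch of an 'effective goal': a bang-bang thermostat.
# Nothing here "wants" anything, yet the loop reliably steers the
# room toward the setpoint, so it acts *as if* it has that goal.

def thermostat_step(temperature: float, setpoint: float, heater_on: bool) -> bool:
    """Decide the heater state for the next tick (0.5-degree hysteresis band)."""
    if temperature < setpoint - 0.5:
        return True    # too cold: turn the heater on
    if temperature > setpoint + 0.5:
        return False   # too warm: turn the heater off
    return heater_on   # inside the band: keep the current state

# Toy simulation: the room drifts toward 10 C; the heater adds heat.
temp, heater = 14.0, False
for _ in range(50):
    heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.1 * (temp - 10.0)
print(round(temp, 1))  # settles near the 20.0 setpoint
```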
It looks like you're forgetting the fact that the Intelligence "explosion" will happen in ALL sectors at once.
The thing about the LHC is that a lot of the tech had to be developed on the fly, and better tech that became available to them changed so much throughout the process that by the time it was built, objectives had stretched and mission creep had taken place, but you ended up with something that looked the same at the end yet was so much more. For contrast, look at what happened with DNA sequencing.
:) bring on the AI revolution!
Lfffffgggggggg 🔥🤖🚀
The time is upon us. Humanity shall rise far higher than ever before.
@@WhatIsRealAnymore you are the resistance LOL
@@NeroDefogger Is that supposed to be a critique? It's straight lines on a logarithmic graph of compute toward AI over time. What do you think Moore's law is?
@@WhatIsRealAnymore The rich need us to be consumers of goods that we have enough money to buy. Cars used to be for the super rich only; now they are ubiquitous. The price of tech has fallen dramatically. The first basic four-function handheld calculators cost $80 in the 70s, then you could buy a credit-card-sized calculator for $1 in the mid 90s, and now you don't need to buy a calculator at all: it's one of the things your phone does, amongst its myriad other functions. In summary: the rich make their money by selling to mass markets. 'Twas always thus.
You talk about constraints, but I didn't get it: constraints on what? What's the goal?
The real question is how the hell do I get a job with this stuff?😅😅😅
You won't need one in the new socio-economic paradigm.
@@gdok6088 Yeah, you have to survive until we get there, if we do get there. I'm not convinced the rich are going to share.
@@Nicole-m1p4f All you need is food and shelter, which can be had relatively cheap. And health care; hopefully you have government-backed health care.
@@devonhurd7013 You're making some assumptions, and I'm just using general terminology when I talk... you know, sometimes we just use casual language... I studied computer science in college, but as a minor, not a major.
I'm trying to get involved
So we just jumped AGI up 2 years, from 2027 to 2025? Just because of what happened with OpenAI's recent update? I thought for sure it was tracking for 2027. If that's the case, then surely putting a timeline on AI at all is premature until we understand the progress better. I'm new to AI, so please educate me if I'm missing something.
I love that you are experimenting with your format in the open, and I encourage you to keep doing so. But the rotating/looping images as your video are driving me crazy 😅
I would love to see the video match the audio again. The graphs are cool, but it is super frustrating seeing the same graph again and again. I grokked them the first time 😉
🖖 ❤
Question: knowing that it's exponential, do we run out of resources like energy and data before ASI?
I would like to push back on the idea that it's still going to take as much time as you think. There's a good possibility that I have misunderstood your point of view, but if I haven't, and I understand things clearly, then I would disagree, and I would point to the actual explosion we've seen in robotics due to artificial intelligence in such a short time, compared to the 10-plus years of Boston Dynamics robots we've watched. I would argue that that rapid progress is an indication of the kind of progress that could be achieved in multiple different sectors of science and engineering.
THIS IS NOT MY OPINION; THIS IS FROM OUR CORP LAWYER TEAM. We are a $600B company. The only error possible here is in my retelling and understanding of what I was told.
The CXO execs are beholden to the board. The board is legally required to direct the execs to take the actions that return the best profits, NOT to maximise employment. If the board does not do this (as is its charter), then the shareholders can sue them into the ground.
Therefore, if AI makes more money, the execs and board are legally compelled to use it, and also fearful of legal repercussions if they do not.
So the ONLY thing the board/execs can do to decel is to 'wait for more evidence'. That's exactly where we are right now.
I think you're underestimating how much AGI (or really, ASI) can help with navigating many other constraints, e.g., raw materials -> R&D on new materials. R&D cycles at digital time-scales may be unfathomable.
This upload schedule is awesome
1:30 None of those statements were promises, but rather confident speculation about what our world would be like in an age where AI in general continues to develop; there's a reason this isn't available on OpenAI's website.
My take on constraints being things like money and resources: how long until an AI finds better solutions for those things than we have?
You can ask and it'll give you answers.
So progress may seem to slow down because AI training is taking longer, which means new large models are released less frequently, but the actual progress is still going just as fast as before.
We won't ultimately need to massively increase power generation. That will only be necessary so long as we continue to use software neural networks running on silicon, an extremely wasteful process. In time, we'll likely develop practical neuromorphic processors that will be far more efficient. Einstein was pretty smart and his intelligence was powered by less than 20 watts. Maybe we'll never reach that level of efficiency but there's a heck of a lot of space between that and where we are now.
I can tell you exactly when we will all be able to stop working - 2027. How am I so sure? That's the year I am retiring.
And that will create 5 new job opportunities :)
@@mrpocock Which 5 more AI will occupy.
How much biomass will be necessary for that scenario? Everything will have to be rebuilt. How much land will it take to build new factories and scale data centers if Moore's law is slowing? The computation required will be unlimited, and there are a lot of resource constraints. Maybe AGI is technically right around the corner, but economical access is not so good; it pulls the costs of all the other industrial facilities toward the sky, along with the environmental risks.
We can barely understand our own intelligence. Given the current state of measuring AI with benchmarks, I doubt we would even be able to tell if an AI is generally smarter than a human... AIs are already better than humans at numerous tasks, but coming up with an AI that is generally smarter than humans is something we have no way to measure at present, short of interacting with it like a human and having it produce consistently brilliant work exceeding most knowledge workers, with a clear ability to either avoid or recognize its mistakes more reliably than we can.
As Elon Musk put it, work will become more like a hobby, and optional. I think that's the ideal end state: not entirely gone if wanted, but at the same time no pressure if unwanted. Plus its nature will change; it might seem more like a hobby to us.
That is great news for the rich BUT bad news for everyone else!
The moment those who control capital and corporations can produce the goods and services they need without the help of the middle class and the poor, what makes you think they will share the proceeds?!
If robots can provide the rich with security, work on their farms, cook their food, clean their houses, make their clothes, etc., everyone else would be doomed!!
Anyone else who doesn't own any significant assets (land, houses, factories, patents, etc.) would be rendered useless. And the vast majority of us belong to that category.
Less pressure and stress and some proper vacation time for you guys in the USA
You are incredibly naive
What do you eat then and how do you pay for everything if nobody needs your work?
@@StefanMoises Well, the production side is exactly what might explode once it is fully automated, including the production of food. For one, prices would drop significantly; right now, human work makes up a large portion of end-product prices. Regarding buying power, once the problem becomes obvious there will be political pressure to solve it, which might take different forms. UBI (universal basic income) and UBS (universal basic services) might be one solution, although this would make one quite reliant on government as the key distributor. Alternatively, people might own and freely manage portions of productivity (robots, AGI, …), in which case you would still have a capitalist system in which people decide what their productivity is aimed towards, without the need to work themselves; they would just supervise their portion of the automated economy, and they might even delegate oversight of their share to an AGI if they don't want to do it themselves. Essentially it's an allocation problem and a decision about whether we prefer central oversight or a decentralized capitalist system. Individual work as we understand it would not be required in any case.
When will Heavy Silver get an audiobook?
Read "Frankenstein" by Msry Shelley.
Automation and energy reduced jobs. Imagine the number of workers needed to replace a simple farming tractor, or a PLC replacing an elevator operator. People still have jobs, new types of jobs, but we have a lot more free time. So maybe there is less work to do than in the past.
Remember 6 months ago when you said AGI by August?
🎯 Key points for quick navigation:
00:14 *📈 Sam Altman's Blog Analysis*
- Analysis of Sam Altman's recent blog post on AI advancements.
- Concrete predictions include AI as personal assistants and personalized education.
- Emphasis on slow job changes and the need for extensive AI infrastructure.
03:40 *📊 Epoch AI Research Insights*
- Analysis of Epoch AI's research on AI training compute trends.
- Doubling trends in training compute, training costs, and data for AI.
- Observations on the faster scaling of language models compared to vision models.
08:52 *🌐 Economic and Scientific Constraints*
- Examination of economic and scientific constraints beyond intelligence.
- Focus on material, energy, time, and space limitations in scientific endeavors.
- Discussion on the broader implications of AI scaling in various sectors.
Made with HARPA AI
Automation has always added more jobs in the past. However, there is a lag between the destruction of the job and the creation of new ones. And as is always the case, this time might be different. The frontier of automation may move faster than the lag.
There is no way to conclude definitively that more intelligence wouldn't make a difference without access to infinite intelligence.
12:50 Ya, energy is always the floor-level limitation. And my main concern: what percentage of permanent unemployment will it take to create that social support system? 10%? 20%? I just hope it's sooner rather than later, to prevent so much unnecessary suffering during the transition.
Is there a way to send, like, a request for a type of video?
I ask because I was wondering if you could make a video about ways to live longer that are emerging or already in development, like the mitochondria restoration work in Japan. I was wondering if you have any others that you know of, or have research on, that you could make a video about, grouping everything into one video.
Intelligence is going up rapidly. The next big limiting constraints are labour and raw materials. The trajectory of the price of labour towards 0 (enabled by robotics) will help alleviate those constraints and unlock an acceleration away from entropy at least whilst matter and energy still exist.
Glass chip substrates are coming in '25 or '26; might they make a difference?
Broadly agree with the broad view on constraints… and for science, "intelligence" is far from the main constraint. We already have vast intelligence available in the living human population, and those of us involved in research can only rarely get the funding and other support needed to follow, develop and apply our ideas, no matter how motivated we are. I doubt an AI that is not self-motivated and switched on can compete with a motivated, switched-on human.
These are the videos that you excel at. I really like the videos where you basically do the Dr. Nick Bostrom work. It's true that your predictions might not be 100% accurate, but at least you show an informed scenario.
11:09
"Electricity was scary when it came out."
“I couldn’t have electricity in the house. I couldn’t sleep a wink.
All those vapours seeping about.”
-- Countess Violet Grantham
;->
Not sure that automation created net more jobs. The number of jobs might have gone up (even that has to be checked/verified), but I would like to see how that number compares to population growth. If job growth was +3% vs. +5% population growth, then we still lost jobs in relative terms, since without automation all jobs would need to be done by people.
More important, though: do we want people to do all the jobs? The majority of people work to sustain themselves. There's quite a disparity between what people do for money and what they do for other motivations (like free work in the community, etc.), and at least for Germany, over 60% of societally important work is unpaid voluntary work (based on hours worked).
Every gain in life carries a cost. For every achievement, something must be sacrificed, and even in success, there is always a trade-off. If AI were to solve all human problems, the trade-off might be the very things that make us human: our need for purpose, independence, and emotional connection. In exchange for a world of efficiency, stability, and optimization, we might lose our capacity for creativity, resilience, and the unique struggles that shape our individual and collective identities.
Keep cheering for utopia...
That is zero-sum thinking; you sound like a religious person as well.
@@daniellivingstone7759 I didn't know being religious was a disqualifying factor. Fire, agriculture, the printing press, germ theory, etc. were all brought about by deeply religious people.
I'm not a religious person myself, but he might have a point: agriculture didn't necessarily make us happier (at least according to historians like Yuval Noah Harari), and it definitely brought about the institutions of monarchy, slavery and organized warfare. Germ theory did make us live longer, healthier lives, but at a cost as well.
One way OP could be wrong is that if future AGI/ASI is achieved, we'll arguably be able to expand indefinitely into the solar system, where like-minded people could build the societies they desire most in terms of privacy, goals, faith (or lack thereof), level of tech allowed, etc. Earth is as small as it is big, you know.
@@theWACKIIRAQI I agree with what you are saying, but gains do not always breed costs. The ultimate gain will be life extension by hundreds of years. This will only breed a cost if it precipitates large population increases. If AGI mitigates these by somehow discouraging reproduction and making people happy with such choices through advanced manipulation of brain chemistry or neuromodulation, then there is no equivalent loss, unless you believe it is God's will that humans should not alter their biology and should multiply.
Thanks David !
Great information.
Let's hope we don't destroy ourselves the old fashioned way before utopia is reached.
Curious whether you will address the "AGI in 2025" prediction next year if it's not here.
I have been watching this space long enough to have seen the first "AGI next year" claims made and pass; these kinds of ideas were big in late '22 and throughout '23, and 2024 was a big year for them too.
This year marks 10-year anniversary of the "forward-thinking" video "Humans Need Not Apply." Rewatch it, it's still relevant.
Sam and others like him are speaking about the future from their own shoes, shoes that will get wealthier as this AI train keeps rolling. For the rest of us watching from outside, it will be a mess, maybe one of the biggest messes in humanity's adoption of new tech. These large companies are like boulders rolling down a cliff: once they pick up speed it's nearly impossible to stop them, so they will replace humans and become bigger and faster while normal everyday people suffer.
Just look at the world we live in. Here in the United States we have a large amount of homelessness, but at the same time our politicians have banked billions. We have the money and power to stop that right now, but our government doesn't. Not sure what would make anyone think that will change, unless it's to get worse.
I am all for AI; I know it will and can do amazing things. But I seriously doubt we average people will get much, if any, of the benefits it will bring. For any real progress to happen, a lot will need to be changed by people who frankly don't care about anything or anyone unless it's money and power.
Training time will not keep going up at 20% per year. It will quickly reach a practical limit, which is about 1 year, I would think.
Actually, I'm disappointed by how slowly AI is advancing.
I see a slight mistake in a key assumption here.
Most constraints are the product of our cognitive limitations.
It is typical of very smart people to find far easier ways than most to achieve a given goal.
Superintelligence will be able to dramatically reduce or even sidestep most constraints by figuring out approaches that are orders of magnitude more efficient.
As just one example, the expensive race to build arrays of nuclear reactors will be cut short if the first thing ASI does is unlock compute energy efficiency in the ballpark of biological brains.
But you don't take into account that the embodiment of AGI will remove physical constraints: robots will build other robots, energy plants, new cities, etc.
On the "Automation Cliff": human curiosity and creativity are boundless and infinite. Just look at where they got us! That, in itself, is an assurance that the AC is an intellectual aberration based on a foundational underestimation of human ingenuity. End of rant.
Thx. Well analyzed. Numbers are sometimes scary. We, however, are humans with great capacities. In short: as machines get faster, the time to adjust gets smaller. Meaning: we need to create the next best cooperation among all nations, laying differences aside in order to present the best in us with higher accuracy & precision, to overcome whatever the barrier might be. We did a great job in the past… let's do it now & keep going strongly for the time to come.
Actually, on the scaling speed difference between language and vision: totally logical, as the following can be inferred from McLuhan's "Extensions of Man" postulate: speech = 1D, vision = 2D, agency = 3D, sex = 4D (so of course vision will consume more compute than speech, probably slowing down scaling?).
I think the automation cliff is when there aren't humans who can fill the new jobs left after AI automates nearly everything, because most people are of average intelligence, and the last few jobs to be automated will likely be for extremely intelligent people with extremely high levels of experience in their fields, jobs that normal people can't be hired to do.