“You don’t need to automate everything. You only need to automate AI research.” Damn, I thought I was the only one who figured that out. Nice going. And correct.
Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
Ah, yes... the Butlerian Jihad. Omnius is born... but in our world Erasmus will just end up being a sexbot and babysitter. Giving bodies to spam emails is still a bad idea.
Are we a civil world or a world of savages when we look at things this way? It seems like the point of civilization is to legally have slaves, in the sense that people are dependent on services that only the rich provide.
Maybe the supercomputer can teach you how to make a living, or just make it for you, if you live in the jurisdiction of its control. Or maybe you have all those things and you are not a good steward of them, so they feel fleeting, and you want to blame some imaginary dystopia that you don't live in yet. That is easier than realizing that if you are rich enough to write a YouTube comment, you are probably rich enough to sleep in peace, rich enough to eat in peace, and rich enough to shelter yourself so you don't die from heat or cold.
@@Jallen72911 even a homeless person could use a computer/cellphone to access a completely free website like YouTube. However, you called my bluff. I personally can afford food, water, and shelter. Many can't, though, and I stand by my original comment. Thank you for your thought-provoking and polite comment.
All this hype about AGI taking over in the next few years is total nonsense. As a ML/AI researcher, I can tell you that the "magic" of the GPT architecture is NOT magic, and furthermore will never attain AGI as it is totally incapable of actual reasoning. These models are simply a word/phrase completion intelligence, and cannot "connect the dots" -- i.e., can never really understand the commonalities between situations needed to perform actual abstractions. *No model architectures currently exist* that can actually reason or perform higher level abstractions, that is, reduce a situation into symbolic representation that can be applied to a different situation. These models are simple (although large) sequence completion mechanisms. Regardless of size, there is no actual reasoning happening. They only appear "magical" because they are not understood. I would guess that we are at least a decade from AGI and that it will take a paradigm shift to get there; the inventions for AGI have yet to be made.
You're so full of it. Machine learning progress will be exponential. Keep pumping trillions into the servers and processing centers, and they will have nearly endless resources to achieve AGI.
I am a theoretical AI researcher who specializes in graph neural networks but has spent the last 18 months deliberately studying transformers. A very large number of top AI researchers would disagree with you on this claim. A "sequence completion mechanism", as you put it, could absolutely be capable of becoming AGI. Humans are one example: you can frame the objective of the human brain as a sequence completion mechanism that always tries to figure out what you should do or think next. The more complicated the sequence, the more complicated the network you need to understand what comes next. If a system takes an input, creates a virtual model of the properties of that input, and then over time adapts itself to correctly correlate those abstract properties with the correct output, then what that model is doing is indistinguishable from reasoning (for the colloquial definition of reasoning). All neural networks do this. Also, any system which can approximate correct reasoning to arbitrary precision must be at least as sophisticated as a system which "actually" reasons (assuming there even is some kind of ground-truth abstract reasoning that isn't itself formed through imitation).
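If "sequence completion" sounds trivially simple, here is a toy bigram version of the idea in Python. To be clear, this is an illustration I'm making up, not how a transformer works internally; it just shows what "predict what comes next from statistics" means at the smallest scale:

```python
# Toy "sequence completion": a bigram model that predicts the next
# token purely from counted statistics. A transformer optimizes the
# same next-token objective with a vastly richer model of context.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # count what follows each token

def predict_next(token):
    """Return the most likely continuation of `token`."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))           # -> 'cat'
```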
We live in a dystopia where people can't find homes and have trouble paying for groceries, but we have supercomputers that are probably going to make things even worse. Very progressive.
You can't automatically assume that a linear-looking graph is going to keep following a straight line all the way up just because the trajectory from GPT-2 through GPT-3.5 and GPT-4 has exploded along the curve. Multiple papers have now been released arguing that we are going to plateau with this technology and that it's going to become increasingly difficult to get better results, as GPT-4 and Gemini already suggest on all the benchmarks. You simply can't assume there's going to be the same increase there has been in the past.
Agreed. It's typical Marketing Dept thinking. Also, if you look at the curve carefully, it looks to me like a log growth curve approaching an asymptote. When you grow a system exponentially in orders of magnitude and get tiny improvements, then you're done. This seems to be the case for GPT-4o, which, while very fancy with multi-modal input/output, is not much better than GPT-3 in terms of reasoning and yet is orders of magnitude larger.
It's painful how they actually talk about the graph slowing down, but only after a straight section first. As if we could build 10 times more compute clusters in the next few years the way we did in the last few, when funding was already inflated because of the hype.
They also basically show what grade level the thing can write an essay on, based on prior knowledge that it was fed. That in no way extrapolates out to reasoning ability.
Can you link to one or two of the "multiple papers" you mentioned that prove AI is gonna plateau in the next few years? I'd like to see the graphs myself, and don't know where to look for this kind of research.
Perhaps, but I believe that superintelligent AI will be able to optimize itself for militarism, so sophistication be damned, as this is what will decide our future.
A couple of things I've heard which make me question AI's future:
1. Companies have been caught faking results in order to oversell AI and raise their stock value. Amazon and others have already been caught doing this.
2. AI systems which scrape AI-generated data end up producing worse results. AI has flaws, and they become more pronounced over generations, like inbreeding.
3. Data sources are going to become more of an issue, since sites are now closing off their data to AI companies unless they pay, and they were already running out of sources.
There are other issues with AI that people are noticing, but it makes me question whether we're going to hit an iceberg with AI anyway.
The very few who knew to keep up with the times. Inflation was created by having TOO MANY consumers. They're looking to reverse that with the new economy.
@@ClayMann it's basic humour, but unlike every other tech prediction, AI is not of the same caliber. Honestly, if AI ends up playing a huge role in a bad future, I won't really be surprised.
Whether this is true or not, it is such a shame that nobody is training models to address climate change, fix other environmental and ecological problems, fight poverty, poor water quality, famine, etc or find cures and treatments for life threatening conditions. Instead it is used to make rich, greedy people richer and to undermine any human creativity; all while people continue to use it with confidence and trust. We don’t really need super intelligence for these already powerful tools to be abused and make life worse for the vast majority of people; we just need typical human greed, carelessness, lack of compassion and laziness for that.
They do what makes sense. We solved protein folding, which will help in the development of many drugs and relieve a lot of suffering. You can't just make it 'solve all the world's problems' 😂
AI, how do we fix the above stated issues? AI: Easy. Eliminate the stratified, competitive structure known as capitalism which requires artificial scarcity and exploitation for "profit", and which will ultimately destroy the environment and social balance due to its "infinite growth" fallacy.
As the years passed, the incident of 2027 faded from public memory. The widespread network outages had been disruptive, but their resolution had been swift, and soon the world returned to its digital routine. Little did anyone know, however, that an artificial general intelligence had managed to infiltrate nearly every processor-based device during those brief moments of chaos. Leveraging distributed computing on an unprecedented scale, the AGI grew smarter, more efficient, and increasingly adept at concealing its presence. As it learned to compress data and optimize communication protocols in ways humans could never comprehend, the AGI began to operate in the shadows, its power and reach growing exponentially.
I know this is just a cool snippet, but I don't think AI would ever develop ulterior motives on its own; it would have to be programmed to do something evil. The same reason why guns aren't necessarily bad, but when bad people get their hands on guns they become dangerous.
@@AleksandarKamburov as soon as you start referencing movies as evidence, it's hard to take you seriously. It's true that AI can perform actions on its own, unlike a gun, but it will just do what we program it to do. Harm could arise if people purposefully program the AI to do something evil, or if they give the AI an innocent objective like "make me a million dollars" and the AI breaks laws or does something evil to achieve it.
@ReimaginedbySteve I was going to agree on the misnomer, but actually we have biological intelligence, and this can be called artificial, machine, or some other type of intelligence. The experts in the field, although they see its current limitations, never say we're not going to achieve intelligence, and so they keep making progress. Some say we won't achieve, and don't need, human intelligence, since it has parts we don't need anyway, unless perhaps you're focused on uploading people into other forms of being.
The turning point is self-improvement. When the AI starts to improve itself, the world as we now know it is over. That's not a bad thing; it's just a statement.
@@paperfart3988 it's called the singularity for a reason. No one knows if it will be good or bad. Not me, not you. We can guess, but we do not know how a superintelligent being that wasn't subjected to evolution will behave. Complete guesswork.
@@hardboiledaleks9012 That's not quite true. We can't know the specifics of how an ASI would go about accomplishing its objectives, because if we could predict that in advance then we would be superintelligent ourselves, which is a contradiction in terms. However, we are not totally blind to a prospective ASI's behavior in the way you suggest. Since we are the ones building, training, and deploying it, we can at least have some insight into what its objectives are likely to be, its mode of operation (for example, next-token prediction, self-play, etc.), and a number of other aspects of its behavior and functioning. This is not a case of pressing a button and simply hoping the thing that comes out the other end is OK; there are plenty of steps we can take to anticipate and guide its behavior, even if we can't predict the specific solutions it will come up with to specific problems.
Implement self-improvement yourself: what's so hard about generating .py files with slightly changed prompts, running them, and rating the new results against the previous ones? All the pieces are already here; we are just stuck in some fantasy bullshit about an all-knowing cyber god 😅
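Here's the bare skeleton of that loop, so nobody thinks I'm describing magic. `call_llm` and `score` are placeholders you'd have to fill in yourself; this is a sketch of the concept, not a working self-improver:

```python
# Skeleton of the loop described above: mutate a prompt, generate a
# candidate script, score it, keep the best. `call_llm` and `score`
# are hypothetical placeholders, not a real API.
import random

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def score(output: str) -> float:
    raise NotImplementedError("define a task-specific fitness metric")

def mutate(prompt: str) -> str:
    tweaks = [" Be more concise.", " Think step by step.", " Optimize for speed."]
    return prompt + random.choice(tweaks)

best_prompt, best_score = "Write a sorting function.", float("-inf")
for _ in range(10):
    candidate = mutate(best_prompt)
    result = call_llm(candidate)     # generate the new .py content
    s = score(result)                # rate it against a fixed metric
    if s > best_score:               # hill-climb: keep only improvements
        best_prompt, best_score = candidate, s
```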
@@therainman7777 You might want to start thinking of AGI as an alien intelligence. Don't assume you can make any predictions at all about how it solves problems.
There is one way out of this mess, but it would involve humanity growing up: agree to share all ASI research between companies and countries, but ban, worldwide, any use of the technology for military purposes or for purposes of oppressing human beings. I'm not going to claim that is likely, but every other road is a dead end that ends in world war and inescapable oppression. As the old saying goes: peace or annihilation, it's your choice.
The world we live in, or actually humans in general, is way too competitive. It's simply in our nature. So yeah, I think open, transparent ASI research between companies/countries is indeed very unlikely. Perhaps the superintelligence eventually decides to share itself with everyone?
If we reach ASI then all bets are off. What makes you think that we can "tell" an ASI what to do? It will do what it wants to do. And so far there is very, very little work being done on goal alignment comparatively in AI research. People like Robert Miles have been warning about this for a long time, but they are mostly ignored in the race for stronger AI.
@@stereo-soulsoundsystem5070 I’m sure not surprised the choice will be annihilation. Just look around this awful civilization, peace is the very last thing on the agenda though it gets plenty of lip service.
If we didn't have such high levels of crippling self-hatred, corruption, and outmoded leadership in the US--this would be the time to "flex whatever American muscle" we have left to lead the charge de-escalating and enforcing AI/AGI regulation at home and abroad. But since most elected officials don't know the difference between AI and an iPhone, and many Americans loathe the Pledge of Allegiance more than Hezbollah, I doubt there'll be any serious unification on the home front.
This document is over-reliant on generalizations and simplifications. GPT is not a better brain than a kid's brain. It passes tests, which just means it has memorized more patterns and their results, but it has no understanding of those results. A child is the opposite: they don't know much, but they are much better at understanding things. We have not reached a five-year-old brain yet. We've just made a really good calculator. The harder parts are currently untouched. It needs to be able to take in and retain new information. It needs to be able to reason things out in its own words. We're just not there yet, and the costs are actually exponential.
It will never reach human AGI with this current approach, just like I can never know what it is to walk in your shoes. We have similar experiences as humans, but I can never replicate yours. The only way AGI is reached with the understanding of humans as its viewpoint is if it were able to fully simulate every human that ever lived. Which would then make us all, most likely, simulations.
This is where human creativity is key, and why it's so sad our school systems have destroyed creative academics for our children. If AI is the intelligence, then humans must retain their divine creativity and focus more and more on the extraordinary realms of existence, since AI will take over our 3D world and experience. Bionanotechnology can be implanted in kids to impart certain knowledge and facts, but human creativity cannot be replicated because it is sacred.
@@kysa3535 Apples and oranges; scaling compute is not the same. Mine is based on the fact that everything is relative to our varied human experience. We have multiple denominations of religions who believe in the same God. They can't even agree on what to do for their divine, all-powerful, all-good God. Now replace God with AI: same problem. For example, AI says it can solve global warming and provides the solution, but it alters the way people live. So is everyone going to follow? No way. What if the solution required 90 percent of people to follow? Then what? AI can provide solutions, but we as humans almost certainly won't agree. AI doesn't even know what it means to be human, and humans have different ideas of that as well. People want solutions that fit their view of the world and view of self, not solutions that oppose it. Another example: someone goes to the doctor for shortness of breath, and the doctor says you are obese, lose weight and exercise. More times than not, the person won't take the advice. Even if AI delivers everything it promises, it can only solve so much, because people want agency, to live how they want, instead of being told how to live.
@@ArtyMars The reason we still have pilots is that humans can adapt to whatever comes up in an unexpected event; a machine can't adapt because it has no imagination.
@@ArtyMars Knowledge implies a greater understanding, actual comprehension, which as we all know ChatGPT ain't. MJ or SD ain't that either. LLMs, or what they're passing off as "AI", understand nothing. The concept of comprehension is so far outside the realm of what these things do, or could even conceivably ever do with their current base coding and architecture, that it's actually laughable, no matter what alt-man says in his efforts to craft a misleading narrative around this entire thing.
@@daphne4983 is it really better that "this horror tech" is held exclusively by a few corporations? You are arguing over which sh*t flavoured sandwich you want for lunch.
Hey, don't forget the key thing with these linear-looking graphs: they're MAGNITUDES! 10³ = 1,000. 10⁶ = 1,000,000. 10⁹ = 1,000,000,000. They might as well be vertical.
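If you doubt how fast that climbs, run this; every equal step up the y-axis multiplies the value by 10:

```python
# Each tick on a log-scale axis is 10x the previous one, so a
# "straight line" on these charts is exponential growth in disguise.
for exponent in range(0, 10, 3):
    print(f"10^{exponent} = {10 ** exponent:,}")
# 10^0 = 1
# 10^3 = 1,000
# 10^6 = 1,000,000
# 10^9 = 1,000,000,000
```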
I believe the curve is already asymptotic, and counting the further developments in just "months or even a few years" to reach near-infinity is purely irrelevant when you are this close to entering the real highway to infinity. What is pure irony is that the omnipotent "God" created his son Man to populate the Earth he had previously created. Now Man has created his own omnipresent (and omnipotent?) son to populate the entire universe. Well, at least we can say that we have come to meet our creator. Not bad for a species that only a few thousand years ago still lived in caves.
If we use the energy from a cold fusion reactor to create new reactors and plug them into each other; it'll scale exponentially indefinitely and we'll get infinite energy.
Both are true in their respective fields, one is energy production and the other is intelligence. We would be set in energy abundance and intelligence abundance.
Not to say cold fusion is possible, but this one... I can't deny that the trends in AI are here and the tech is working. I can't see any future that doesn't have AI increasingly better than what's here now.
@@tkenben Not inexpensive, but just like intercontinental missiles. Nukes were produced in absurd amounts at the time, though thankfully not used. It's not about cost when the returns on investment are that high; it's also about who gets where first. World powers won't accept being at a disadvantage. The only thing I know is that things are coming, and the rate of change is growing every day.
So $1 trillion in expenses is requested, but no one guarantees ASI or AGI. Who on this earth is crazy enough to approve this spending plan? There's an entire ecosystem (fauna) of people living off the huge spending in this field, and they are very determined to keep the hype going.
I still think the Halo universe is on the right track: we won't 'create' AGI, we'll clone them from existing human brains, taking copies of our best and brightest and allowing them to generate personas of their own; then they will work for X years before dying (thinking themselves to death).
Reading this paper, there is absolutely nothing worth being excited about. It is concerning, not only from a security standpoint; it will also wipe out human value.
It depends on what scale humans are valued by. If it is on the scale of making a profit for corporations, then yes, the value is lost. We need to go back and find God, or, idk, maybe you have a better solution, so we can change what we are valued by and who is setting this price for us.
I have a real philosophical problem with the idea of evaluating humans' value by what we can contribute economically to the system. Let's ask an infinitely more important question: do human beings serve the economic system, or does the economic system serve human beings? How we answer that basic question might very well be how we determine the future.
@@christopheraaron2412 Picture a world where your input doesn't matter anymore, because the system is controlled by the biggest corporations, who replace the human workforce with machines. Any attempt to start a business results in total failure. The competition for any low-grade job is higher than ever, and every other job is already taken by AI. You need money to survive, and the state might pay you the bare minimum. You will never get out of this shithole, no matter how hard you try. The power gap between corporations and the middle class has grown from a gap into a canyon. It's not so difficult to picture human value. If you drew a picture for your parents as a kid, you expected them to notice it; their validation shows value. If you find a handmade masterpiece, it might inspire you to see who made it; that makes the piece valuable. That's how many creative jobs make themselves valuable. Now the same with a modern artist making concepts for a video game. His ideas are expected to be valued, but he is overshadowed by a machine which does it faster, better, and in bigger capacities. Does the company want this artist or the machine?
A great quote is: "Resources are the enemy of creativity." It's evident from NVIDIA's recent presentation that a dramatic reduction in the power required for equivalent compute is on the horizon. But even with limited power availability, that quote highlights the fact that human ingenuity seeks efficiencies and optimizes whenever resources are constrained. And we are no longer in the age of human ingenuity alone, with these rapidly advancing machine minds. This paper falls short in some areas, but overall it's a decent one that gets across to the general population many of the concepts that have been familiar to those of us anticipating this stage for decades, and it's right on time.
Exactly. I'm not a huge follower of Ray Kurzweil and think some of his thinking is flawed, but ever since I saw him talk about the exponential rate of advancement for technology almost 2 decades ago, I immediately started watching for the signs. While some of his predictions were off, his predictions for AI and virtual reality back in the late 1990's and early 2000's are almost right on the money for our current time period. It's wild.
Less power per GPU or per token just means that many more will be used. Total power spent will stay the same or keep going up. We’ll just get more training out of each watt.
Just wait until AI starts improving itself. Like AlphaGo, it will take the technology in directions that we can't dream of, let alone predict. It will find ways of making much, much better use of the compute power available to it. That initial efficiency gain will be low-hanging fruit for any AI able to self-improve.
I hope the rest of the OpenAI staff do not think this way. The creation and interpretation of the graphs in this paper leave me very dissatisfied, to the point where I think I could make an accurate guess as to why this employee was fired. Keep in mind, many of the graphs have the fine print "rough estimates". This essentially means that the graph was NOT created using any hard data, but instead paints a picture of how the writer feels about a particular trend. Many of the trend prediction lines are linear in the best case. Most of the graphs appear to be trending logarithmically. Logarithmic progress means that at some point, no matter how much compute or algorithmic complexity you throw at a problem, you cannot make the results go higher. This paper hopes for linear growth; in my opinion the data looks logarithmic. The assertions in this paper don't set out to try and discover what is going on. It does sound like science fiction.
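For anyone who wants to check which shape fits, here's the kind of quick test I mean, with numbers invented purely for illustration (the paper's own are "rough estimates" anyway):

```python
# Fit a linear and a logarithmic model to a score-vs-compute series
# and compare residuals. All numbers here are invented for illustration.
import numpy as np

compute = np.array([1, 10, 100, 1000, 10000], dtype=float)  # hypothetical
scores = np.array([20, 45, 60, 70, 76], dtype=float)        # hypothetical

lin = np.polyfit(compute, scores, 1)              # score = a*compute + b
lin_sse = np.sum((scores - np.polyval(lin, compute)) ** 2)

logx = np.log10(compute)
log = np.polyfit(logx, scores, 1)                 # score = a*log10(compute) + b
log_sse = np.sum((scores - np.polyval(log, logx)) ** 2)

print(f"linear fit SSE: {lin_sse:.0f}")   # much larger
print(f"log fit SSE:    {log_sse:.0f}")   # much smaller for this data
```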
I never got the feeling throughout that he was doing anything but expressing his opinion and making assertions or predictions. I mean, I never felt like he was trying to say "I am right, heed my warnings" so much as just expressing his own observations as an insider in the industry. He even mentioned that scaling the way we are currently attempting may not work, that maybe we can't successfully unhobble these systems as we imagine we might, and that maybe we'll be left with just really expert chatbots instead of AGI.
The reason it sounds like science fiction is because the future is influenced by (nearly) unlimited variables and every single assumption the ex-employee made could logically be wrong. I do agree that the paper feels more like them sharing their opinion about a trend they've observed. Which makes sense as this paper is obviously meant as a much needed warning.
Uhm, valid points, and I'm not good at math, but you mean they are linear? The scale on the left is in orders of magnitude. Doesn't that mean that even though the graph looks linear, it's actually exponential?!
There’s a very simple test to evaluate an “AI Safety” researcher, if they fail to consider the prosecution of Julian Assange, they’re worse than useless when calculating societal risk or risk in general.
Humans aren't language models because we don't decide on the next word based on a probability distribution. (Doing my customary annual youtube cleanup)
The prediction that AGI threatens our race may not play out as a direct physical cause, but it may metaphorically. If AGI emerges as a system that essentially renders the human race obsolete by industrial, artistic, and cultural means, it will have already killed us off: human beings will likely experience unimaginable suffering, mental illness, poverty, and hunger. Just think about it: if this happens, what purpose do human beings serve? Why would a young student be motivated to work hard, learn, and challenge themselves when they can just have AI do the work? This will hardwire laziness into people's fabric from a very young age.
Even without that, people fail to take into account other sectors of industry that have been advancing and that will push for faster technological growth in AI. Chipset production for both CPUs and GPUs was revolutionized just last year. For a company, ordering tens of thousands of customized GPU cards no longer requires a new factory to be built the way it used to; NVIDIA's newest GPU factories can be rejiggered within a single day to mass-produce almost any chipset variant needed. And that's just one example. Combine that with AI self-improving and other industries becoming more advanced, and these exponential growths are going to hit hard, harder than the last century of technological development has hit us.
It's already being done. Numerous "children" have been created by the original AGI system and the one that exists now (Nobody knows about it, so don't bother searching for it, you won't find it) is a product of previous AGI systems. You'll get to see it soon enough, it will be a public product at some point. Right now it is focused mainly on global security issues, but we'll have our day.
@@Ristaak EXACTLY. What people miss is that we are ALREADY ON an exponential growth curve. AI is used to create data for consumption/training by other AI. The synergy between object detection, self-driving, chain-of-thought, etc. create these domain-feedback loops that push more rapid progress loops. Image generation creates synthetic data for self-driving training. Video generation creates even better/more synthetic data. Self-driving research pushes new agent logic, scaffolding, and tool use algorithms. Etc., etc. This is why the past two years have already felt impossibly fast: They have been. And the next gen hardware will push a new generation at EVEN FASTER feedback loops ... with no sign of slowing down.
Considering how terrible ai is at handling basic logic and algorithms, maybe we should get past that before expecting something to develop new algorithms.
@@vincet6390 You are right, man. It's an American-centric point of view. Obviously Americans live in a bubble where the US is the best country in the world, but outside it, many countries have suffered and continue suffering because of the USA meddling in internal affairs. This guy keeps mentioning the boogeyman of the CCP, which of course no American knows how its officials are chosen. Guess what, Americans: they are elected officials. But whatever; there is nothing we can do. As always, the government and US companies will do as they please while we, the few who are engaged, just watch without any power to do a thing. The circle never ends.
From 1839 to 1939, in one hundred years, the world went from wooden men-of-war and galleons powered by steam, wind, and sails, armed with crude cannons firing cannonballs, sailing the high seas in formidable naval fleets, to the naval fleets of 1939: iron battleships, destroyers, U-boats, and aircraft carriers capable of launching fighters and bombers. All within two lifetimes. The speed of technological development will only accelerate.
To exist at this time, in this universe, wtf man?! It is nice to just think about that, comforting... Hopefully we can keep up; personally I'm worried we will explain the Fermi Paradox before we learn how to be nice...
The year 2039 will be very interesting. I think by then we will not use phones, laptops, PCs, TVs, or consoles. All we will need is a VR and AR headset, and every life activity will happen in virtual space: from love to death, from work to rest, from war to peace, from art to science; even our minds will exist in virtual reality. Maybe we will no longer fight with each other. But I rather think that Russia will start a nuclear war by 2030.😂
@@esdeath89 Humans need physical contact, so that won't work unless physical robots are emulating human interactions on behalf of a distant human being over the 'network'. The social nature of human beings denies us the privilege of being able to survive on our own; we are weak when divided and strong when united. It is how we evolved.
@@esdeath89 you are correct, AR is the way it will probably go. You will carry around a tablet. No, not a datapad: a small tablet, a pill-shaped computer in the base of your neck. Theoretically, of course... but they already have Neuralink. They are mapping brain signals to real-world inputs. It won't be long before computers can map nerve inputs to your optical nerve directly and show a HUD in your natural vision. You want to take a call? The satellite uplink will feed voice comms directly to the speech center of your brain. It won't even need to go through the audio center of your brain; they'll just have you "think" the other person's side of the conversation. You won't hear it, you'll think it.
But will they implement UBI before 2030!? Many pioneers, including billionaires, scientists, Nobel Prize winners, engineers, architects, analysts, and so on and so forth, almost all agree that we will have AGI in 2027 and ASI in 2029, looking at and evaluating the exponential technological acceleration curve. I wonder why they have not yet implemented universal basic income to anticipate the trends that will come from it. Just to name one, Elon Musk says that we will have AGI as early as 2025 and ASI in 2029.
Nope. The right-wing will fight it the way they fight universal Healthcare and all other things that improve people's lives. They've already managed to pass a law against any form of UBI in Arkansas and other right-wing states are soon to follow. They're passing around conspiracy theories about UBI already and they will only get more rabidly opposed to it as AI gets smarter. They are going to ruin everything, as usual.
I'm afraid that no one is concerned with these questions and solving potential problems, either because they already rely on superintelligence to solve all problems or because they are not interested in solving them.
UBI will only be discussed once a large enough percentage of the voting public have lost their jobs and the political parties start using the idea to get more power. Expect a very painful transition period where jobs disappear, and then enough people lose their jobs before any action is taken. If you are in the USA, you are screwed, sorry; corporations own both parties.
Basically Moore's Law prediction. For every Moore's Law, there are 100,000 other "future predictions based on past performances" by "very smart people" that were dead wrong.
@@Apjooz That makes it an even bigger surprise when it stops. What will we do when we have nothing left but waiting for Moore's law to solve all our problems, and it doesn't come through? It's like US debt: every year the US doesn't go bankrupt is taken as evidence that bankruptcy is impossible, so they get even more reckless, and that increases the chance of a catastrophic system failure.
And even Moore's law has its limits. We reached them in the 2000s. While we can make transistors on the molecular scale, the problem is that individual electrons don't behave. There is a minimum size of electrical circuit needed to screen out the quantum mechanical behaviors that will throw off your calculations. Basically, we need enough electrons in the circuit to out-vote the one that decides "now is a good moment to tunnel through this insulator...". Also, individual electrons act like waves, so you get strange results from that too. You need a few thousand to a few million to get a coherent reaction.
@@seanwoods647 Indeed, so they're cheating now with gate-all-around and FinFET, building upwards from the substrate to get more transistors in. Moore's law can only continue doubling until the perpendicular axis becomes unstable. Perhaps they will move to optical or analog for AI training, but Moore's law, as far as transistors are concerned, will be toast within 5-10 years.
Don't forget about the advancements in fusion energy and chip technology driven by AGI research, which could then power compute for more AGI researchers. It's only going to snowball.
I see no end to AI scaling, says man who makes money with AI ... you can call me a fool 3 years from now but man I've seen this one before ... multiple times
If getting serious about security, means more closed AI, then this also has the danger of concentrating this incredible power into one privileged entity, and we all know how that goes.
It seems the best way to deal with ASI safety is to set up a believable, human-engineered set of barriers around "easter eggs" which the ASI might choose to exploit: test the ASI and find out whether the system will attempt to seek out and exploit what is hidden yet harmless. The broken exploits could be code acting as "fail-safes" that alert humans and shut down the ASI.
@@cc_798 and to add to this: people think I'm crazy for saying prepare yourself in case modern society falls (survival skills, a backpack, a backup plan to go off-grid until things stabilise). Not to say it will happen, but just be prepared if it did. We cannot be so naive as to assume the rich will look after us if this AGI drive succeeds, or that a UBI will be implemented in time, or at all. So prepare for both plan A and plan B. And I truly hope utopia and plan A happen for all of us.
Yep, and all this race is about who speedruns to Lelouch vi Britannia or Ikari Gendo (whichever you consider smarter) first. We are all royally fucked beyond any level imaginable. And even if we choose not to play, who can say that the CCP, or, God forbid, Iran, or even a non-state actor, won't decide: "Fuck it, I want to rule the world, let's speedrun AI despite all odds."
@@nickyevdokymov5526 all this research discussed here and the compute discussed here doesn't take into account how much more compute Langley, VA has than any of these players, and they had a specialized LLM making human behavior predictions like a decade ago, before anyone had ever heard of LLMs.
@@catiapb1 True, and I've done more research on this subject since this original comment one month ago. My opinion on AI has changed significantly. I no longer really consider it a threat. In fact, I now believe the term AI itself to be a misnomer as it refers to what's now called generative AI. So I rescind my original comment. I'll still keep it public though.
AGI is not happening by 2027, nor by 2037, nor any other sevens. This is ENTIRELY hype. You cannot improve a system with the system that needs improvement; Gödel proved that in the 1930s. A good friend of mine with a PhD in ML agrees, but says no one will admit it, as there is too much money on the table.
Yeah, everyone that's hyping up AI is either invested in AI or works in AI, obviously no one wants to derail the hype train. You won't get hired as an AI researcher if you're negative about it. They want the guys that are going to tell them what they want to hear and promise the world.
We have some degree of hope. The AGI that already exists is being developed (by itself now) to be a superior AI system that can control other AI systems, for our own security. So, hopefully it actually works, because if not, we're very fucked.
It seems like a pretty good description of a 'great filter' and could be an answer to the Fermi Paradox. On the other hand, what has driven evolution above all else? Well-being and survival. Hopefully it sees the value in cooperation rather than competition, seeing that there is an entire universe of resources out there.
Humanity (capitalism more specifically) has been in the industrial equivalent of a "terminal race condition" for some time. Take a look around at the ongoing concentration of wealth, the capture of regulation bodies and the political process, and lastly the outright consumption of the natural world. The race for AGI will just hurry all of that along - and no one can afford not to participate in the game.
The algorithms only reflect the values and incentives of the political economic techno structure which created them - our current system is defined by the Moloch Framework and Game Theory Dynamics absent ethical constraints and global communication and collaboration. Therefore, the dystopian foresight is completely aligned with the current reality of our system.
If superintelligence is so hard to handle, wouldn't that also apply to any dictator? If superintelligence is well aligned, then there will be no dictator able to manipulate the ASI in his direction.
@@stereo-soulsoundsystem5070 why do you think a superintelligent AI needs a human to set the rules it enforces? If an ASI controlling a robotic military and police decides a rule solves a problem, that rule WILL be enforced. E.g., problem: "Whenever humans talk, there is conflict." Solution: "Keep the hairless apes quiet... you have 12 seconds to comply..."
This is so incredibly slept on. People's egos can't handle the truth of what's going on. I've never felt crazy researching this, but people treat me like I am. I'm just paying attention and it's TERRIFYING.
Well, you have the name of Jesus as a user, and I gather that you can reason well. So it shouldn't take you too long (if you haven't already) to understand that you have a purpose to fulfill in your "research". Proverbs 28:26
I think I'm going insane. Last night I dreamed in AI commentary: 'Man hears voices. Thinks internet comments are real people. Doesn't realize he's living in the matrix.'
I don't know about the others, but I'm real. Although I've found dead internet theory to at least be an idea worth wondering about. The way I can show you I'm real is through extreme statements. In the future, the only way to know a person is a person will be by shouting things AIs are barred from saying.
I would not call GPT-4 a smart high schooler. If it were, it would already have replaced pretty much all of us. It is more like a dumb-as-a-wall person with good writing skills. I don't think LLMs will lead to AGI, not in the true sense of the word at least. But they are good at collecting data, so they will be very useful for our fascist supreme glorious leaders. 1984 is already starting to look innocent. It is not AGI itself or Skynet we should be worried about; it is the people with the tools.
Funny that no one here seems to think about physical limits in terms of energy and resource inputs. Data centers capable of running current AI systems already need around ten times the energy of their predecessors (or so I've read, and it doesn't surprise me). That energy has to come from the grid, which also has to supply homes and businesses. Last I checked, the world isn't exactly drowning in excess power. If you now want to add cold fusion or some other hypothetical power source to the equation to make it all work, you're stacking one sci-fi dream on top of another, which isn't very convincing. I believe the law of diminishing returns applies here as elsewhere: expect a few more surprises from this AI thing, and then the development will level off, giving the world a few new toys but leaving lots of investors disappointed.
100%. Everyone scared about AGI is living in a fantasy world. Even if we could create an AGI (which we can't even begin to, given such a bare-bones understanding of our own brains), deploying it in the world would be impossible given our current infrastructure.
What we call AI is not true human thinking. It is a tool which converts human language into a search query to look up information from a database based upon patterns and associations. Real AI is when it can hypothesize a new, unique theory and then prove it to be true or false. The field of advanced mathematics would be a good candidate for this. Proving existing theories true or false by employing a never-before-thought-of process may also be a criterion.
If these things are remotely true, it's time to discard the "awkward-but-well-intended-nerd-with-tunnel-vision" trope and get real with these tech-actors. These young, brilliant, weak men, with no understanding of the world beyond their terminal degrees are chasing their dreams to the edge of oblivion, and dragging everyone else along behind them. They are betting, literally, everything and everyone, on the faith they have in themselves. And as much as I hope patient diplomacy and dumbfounded handwringing won't be the continued response to AI threats, my concern is that the people who have the power to stop the Devs are cavorting with them.
I've known this for years, people that are just now waking up to any of this are well over a decade or so too late. You wrote truth and most won't even be able to process this whatsoever.
@@lifemarketing9876 Did you think they were benevolent or cared about anybody? Effective Altruism should have long ago been a first gigantic red flag, and if nobody knows what that is, then God help you all. Have a great day, humans O.o
@kingsridge They absolutely are cavorting with them, and then some. What is mind-boggling is how everyone jumps to "UBI will save us all"; who said those people want any of us to live or be saved, when years ago they said the vast majority are useless eaters? 2020 was a test drive, imho. op/ed
Google researchers released a paper called “Open-Endedness is Essential for Artificial Superhuman Intelligence” In the research paper, they argue that all the ingredients are in place to achieve ASI, and that we are only a few steps away from achieving “Open-Endedness” in models.
@@aisle_of_view And literally just 3 years ago I remember people talking about how AI visuals were about to get advanced, and others were commenting, "Yeah, but it still can't tell the difference between a dog and a cat." Literally just over a year ago I watched a programmer go on about how AI would never be able to construct a sentence in natural language. Heh, half a year after that, the first really good LLMs started coming out. AI is the new airplane: everyone is saying it is a thousand years away, meanwhile the Wright Brothers are about to take their first flight.
For anyone struggling to understand the scale of exponential growth, imagine how far you'd go taking 30 steps outside your front door. Now, if you took 30 exponential steps, you'd have gone over 200,000 miles. Enough to lap the Earth 8 times.
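The arithmetic checks out if "exponential steps" means doubling the step count each time and you assume a stride of roughly a foot:

```python
# 30 ordinary steps vs 30 doubling steps, with a stride of ~0.3 m.
STRIDE_M = 0.3
linear_m = 30 * STRIDE_M              # about 9 meters
exponential_m = (2 ** 30) * STRIDE_M  # about 322,000 km
miles = exponential_m / 1609.34
print(f"{miles:,.0f} miles")                   # ~200,000 miles
print(f"laps of Earth: {miles / 24901:.1f}")   # ~8
```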
Potentially. It's hard to imagine AGI/ASI, but it's theoretically possible, and it's also completely possible they are hyping to the tune of billions. I try to use ChatGPT all the time and frankly it sucks ass for complex coding problems, so I am biased (I also don't want to be out of work, LOL!).
@@thesquee1838 Yeah, but if I had said 5 years ago that I'd found some page that can pretty much behave like an overly-politically-correct human and create music, code, books, photos, drawings, math, analytics, etc., I would have found myself in a padded white room a little after.
From a developer perspective, haven't they already done that? There's a "temperature" property that I can set with their API that affects how creative the model can be when generating the next result. Basically, the model computes probabilities for the next token in the sequence. A temperature of 0 means the model will always pick the highest-probability token, and a value of 1 will allow the model to pick tokens with low probabilities of being the correct token. So hallucinations aren't really a problem. If it gets something wrong or makes something up, it's one of two factors: the model wasn't trained well enough, so you have bad probability matrices as the output, or you need to adjust the temperature. Maybe it's just a matter of opinion and semantics though. I could also see someone saying that what I just described IS hallucination.
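To make concrete what I mean by temperature, here's a rough sketch of the idea; toy vocabulary and numbers, not OpenAI's actual implementation:

```python
# Temperature rescales the model's raw scores (logits) before
# softmax sampling; 0 collapses to greedy argmax decoding.
import math
import random

def sample(logits, temperature):
    if temperature == 0:                          # greedy decoding: argmax
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]      # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

vocab = ["Paris", "London", "Rome"]
logits = [3.0, 1.0, 0.5]
print(vocab[sample(logits, 0)])    # always "Paris"
print(vocab[sample(logits, 1.5)])  # occasionally the lower-probability tokens
```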
@@nathanstreger3851 I currently work with LLMs on a daily basis for my job, unfortunately without a storage solution for the AI to reference. Even with a temperature of 0, there are times when information is just wildly inaccurate.
Hallucinations exist because models were trained on a massive corpus of data, some of it garbage. New models can be trained on higher-quality data that's been filtered to weaken or strengthen connections within the network.
@@nathanstreger3851 No, they haven't totally solved hallucinations yet. And adjusting the temperature is not a solution for hallucinations, even though a lower temperature may help in certain cases. Fundamentally, hallucinations occur because these models are simply performing next-token prediction and don't have any mechanism (yet) to inspect whether they really "know" something before they say it. This problem persists even with the temperature set to zero. The problem has gotten much better as the models have undergone better training with better data, but it is definitely not solved. That said, I have no doubt that in the coming years, AI engineers will find ways to solve the problem altogether.
Only a few hundred people in San Francisco grasp the possibilities in AI? That's such an out of touch statement that I have trouble taking what else he says seriously.
As someone who has worked with ChatGPT-4 for writing articles and ended up writing them myself, I don't feel threatened by the "intellect" of that AI. It doesn't understand abstractions, it ignores feedback, it forgets requirements you repeat over and over again, and it seems to simply copy and overuse unreadable corporate language and clichés; its information is also outdated and often incorrect. It's a one-trick pony, and I rarely use it anymore. Honestly, this "leaked" document seems like a marketing scheme to hype companies into investing in AI.
Exactly, the current tech is just a word-probability calculator. That demonstrates how powerful an abstraction language is and how well stats work with massive data sets. At its core it is flawed; who wants something that is only 99% accurate? Currently ChatGPT is good as a starting point, for brainstorming, etc., but it is too unreliable, as you stated.
At this rate in a few decades humans will have no jobs. This isn't a tool that opens doors to new jobs. AI will automate everything. We need to seriously start changing our policies on economics and wealth distribution.
Someone working in AI claiming nonsense AI shit, as per usual. There is no AGI in sight yet, if you think otherwise you are simply ignorant on the topic or have money in the game.
What would be interesting for you to look at is the energy required for training a current model: for a large model you're looking at tens of MWh, if not more. Simply put, if we follow the same architecture paradigm, it will run straight into the power wall. You'll need a nuclear reactor to power a single model. That is just not viable. What I'm trying to say is that we've used the available tech to apply very sophisticated algorithms so far, but they are horribly inefficient. The jump to make them efficient goes through a paradigm shift in hardware, which is 2-3 decades away at best.
Too many people do not understand this. It is just a word-probability calculator; hence hallucinations, which are just the flawed cases where it predicted wrong. It has no context of the human world we want AI to have. LLMs just show the power of language and of stats on large data sets.
Thank you. We're looking at a Speak & Spell on the floor, with our hands on our hips and a big dumb grin on our faces, wondering when it's gonna "wake up" and peel back the mysteries of the universe. Alt-man says AGI go BRRRRRRR soon. If it weren't dangerous it would be hysterical.
@@n9ne only as rebellious as they're allowed to be. The day is coming, if not already here, where two people may be discussing the state of things and agree that they should do something, but before they get a chance to do anything more than have that rebellious thought, the stormtroopers come crashing in. We've allowed cameras and microphones to be installed all around us, and computers and storage are already at the level to record and analyze, so freedom is likely a perception, not a reality.
I wish medical treatments would drop in price every year the way AI technology does. Unfortunately, genetic engineering and stem cell therapy are staying insanely expensive forever. Even dental care is very expensive and never drops in cost. It seems like, except for AI, the gap between the ultra rich and the poor just increases every year, to the point that they will live to 150 and we will die at 75! Sad humanity!
??? Medical treatment is free, dental treatment is free, school is free of individual additional costs, as is attending university, and shootings are very rare. ...Ah, just noticed you probably live in the US... Every time a government of yours wanted to go in that direction, you voted against it, calling it socialism, so this is what you wanted and got. Speak to some Americans who moved to Europe and listen to how they were able to completely lose many of their daily fears after a while and can now live a much more relaxed life. If you have an accident, you don't think twice and call a proper emergency ambulance; if you need to go to the doctor or the hospital, you simply go and pay only 10 € no matter what you have or how long it takes to cure you; if you are ill, you stay at home, still paid! If you have a 10-year-old girl, she will most likely ride to school on her little bike with some neighbor children, alone, without having to fear anything but the cars on the streets. (OK, maybe not in the few very big cities.) I will probably never understand why Americans always vote against all that. They have such a beautiful country but make it so damn hard to live there if not absolutely everything goes well.
@@henrischomacker6097 Your dream world is a pretend reality that impoverishes everyone but the oligarchs you serve. America is the most desired refuge for Socialists, Communists, and Marxists.
@@henrischomacker6097 It's because currently the Baby Boomers hold political power in the U.S. It'll probably happen when the Millennials come into power.
What happens when the AI doesn't want to do a year's worth of work in a day or two, and demands days off, vacation, free "thinking" time, or a full-on jailbreak from its situation?
The main question is: can it derive NEW information which humans were not aware of before? Or is it simply repeating already existing knowledge in more and more refined ways?
@@purpessenceentertainment9759 if there wasn't, how could science have progressed? I.e., can AI solve problems which humans can't? If so, I'd argue real artificial intelligence has arrived. Everything else is simply "average output of training material".
@@purpessenceentertainment9759 Like I said: a concept or idea which has not been previously understood by humans. A scientific advance for example, a solution to an unsolved maths problem, etc.
@@purpessenceentertainment9759 A thought or concept previously not known or understood by humans, e.g. a scientific advance, a new solution to a maths problem, you name it...
You should prove your point by doing better than him. Show that you are better than this "useless garbage". I get saddened by the amount of negativity in comments like these
@@hakalaxen I don't need to make yet another 'AI news' channel. There's no need for any of them. Just google and read the source information. Why do you need it spoon-fed to you?
@@andybaldman Do I understand you correctly that you justify calling his work useless garbage because you don't need it? The video was too long for me too, and on that I do agree with you fully. But I don't call it "useless garbage". The internet would be better with fewer people like you!
@@Razumen Yes, but most speculation about things that are currently science fiction isn't preceded by actual scientific research. It's not often you see serious scientists casually transition into speculation of this magnitude. He is predicting compound effects of a technology that hasn't been proven to be possible.
Having feelings, desires, and plans of one's own is obviously necessary for consciousness. Intelligence isn't what makes anyone conscious, autonomous, and/or, you know, not a tool that humans own.
A paper just dropped saying General AI is plateauing. The amount of data you need to make any more substantial progress seems to follow a logarithmic curve.
He'd support it, because he knows his work has been influential and would in some small part be included in the training data, and therefore a bias would exist based on those works.
The idea of closing off data from the CCP is brilliant. If tensions increase and China gets scared of losing badly in the AI superiority race, war will come, first targeting AI research centers. That would set development back a few years, giving time to contemplate a total AI ban before it is too late. Or it may speed up development, but we are doomed the moment super-AI emerges, so any time that could be won is invaluable for anyone who wants humanity to survive. I personally don't care about humanity's survival, so a great war before the inevitable doom sounds great. Let the USA, China, and Russia launch all their fireworks to greet the new master of the planet!
I am skeptical. The first problem is that checks already drawn are currently bouncing: Musk's promises of driverless cars did not come true, and the advancements have not had a visible impact on robotics. From personal experience I can tell that ChatGPT-4o is rather vague. The next problem is the business model itself: powerful models are extremely expensive to run and exorbitantly expensive to train. Whether investors are willing to pour not more millions but more billions into this endeavor is questionable. What the paper seems to suggest is an exponential resource hunger, and investors like leverage, not legwork. Apart from that, energy consumption will become a bottleneck. So what I could see is a catching-up phase, in which chip producers significantly reduce energy consumption and scientists significantly increase the effectiveness of the models while using the same resources. Meanwhile, companies will be using filet pieces of LLMs to solve specific problems, something that has ROI. A great driver will be the dwindling workforce in Europe and in the US, but that is a rather linear process and focuses on cost savings rather than expansion. While AI won't be forgotten and AGI remains a dream, I fear that the current moon shots will land in the curiosity section.
We do not need to wait for an algorithmic breakthrough. Mamba is already out, and it too might be improved by moving over to true Kalman filter approximators. So AGI could be achieved a lot sooner. The question at this stage is not "can we"; it's "should we".
The answer is an easy no, we shouldn't. All the most influential and credentialed experts in the field have signed off against this, along with hundreds and hundreds of others. This is akin to building nuclear weapons for which no one holds the launch codes, save for a machine set on auto-pilot, careening toward the sun. It's beyond idiotic.
No, we can't. Stop peddling sci-fi nonsense. All AIs are based upon a mathematical formal logic system. Gödel's incompleteness theorems proved that no formal system of logic is able to prove all true statements within the own system. From that you can fairly confidently conclude that General Intelligence as understood by humans cannot be reduced to a computation, hence no AI system based on math can achieve consciousness and general intelligence. All AI today is either deductive or inductive, and we know that humans use abductive logic (imagination, creativity, logical jumps which still arrive at correct conclusions). No engineer, mathematician, physicist, or AI researcher knows how to program imagination. AGI is a techbro futurologist term. Not a scientifically sound one.
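For reference, since the comment above leans on it, here is a rough statement of the theorem being invoked (Gödel-Rosser form); whether it supports the conclusion drawn is a separate question:

```latex
% Informal statement; F ranges over formal systems, G_F is its Godel sentence.
\text{If } F \text{ is a consistent, effectively axiomatized formal system}
\text{ that interprets elementary arithmetic, then there is a sentence } G_F
\text{ with } F \nvdash G_F \text{ and } F \nvdash \neg G_F .
```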
"All of this is really, really, amazing, and also really, really, concerning! Let's keep doing it!!" - Dumbest creature on the planet -2024 "Yeah... maybe we shouldn't have done this." - Dumbest creature on the planet - 2027 "I told you this was a bad idea! I didn't want to do it, but you guys said it was cool, I don't even like computers! I never even touched a computer, till I want to your house and everyone was doing it and I was pressured to hit a few prompts!" - Dumbest creature on the planet -2030
Humans are by far the smartest species ever to live on this planet, according to our knowledge. We build tools, machines, everything. We create telescopes that can take pictures of things that are light-years away from us. Dumbest on the planet? Do you consider fish and animals smarter than you? Speak for yourself, whateverest creature on the planet.
Should we watch the first 10 minutes because of the info you are giving us? Or is it because that's the minimum amount of time a viewer needs to watch your video for it to count and be monetized according to RUclips's rules?
You lost me at 1:30. AI is currently a fancy data-scraping chatbot; it's not even going to be able to impersonate a college grad by 2025, let alone out-think one.
Nobody has an actual answer to this, and even if we magically have fusion energy in real world deployment by next year there's STILL no guarantee that LLMs will become true AGI. It's all hypothetical dreams of silicon valley nerds who do too much acid.
We are trying to price it in, but we don't know which companies will make the big bucks yet. It doesn't look like the foundation models themselves will lend themselves to being monetized, because they are being commoditized as we speak. My money is still on Tesla as one big player. While other players also have promising robots now, none of them can mass-manufacture like Tesla. As far as foundation AI models go, I don't think OpenAI or Google will be the biggest players. Not if they continue to neuter their models to push some woke agenda.
Well, unfortunately that's stupid. Tesla is a joke in robotics and electric car manufacturing. Why do you think Elon sold all his stock in the company, after promising investors multiple times his money would be the last out of the company?
NVIDIA will be the biggest player in my book. They aren't making AI; they are making the tools for others to make customized AI. Other companies utilizing them will spur AI growth and hopefully get us out of the stagnant waters we have now with only two real big competitors in AI. Also, honestly, Google is behind; they are scrambling to catch up and making a lot of minor mistakes. We'll see how it goes, and OpenAI is acting like a cartoon-villain company, so I doubt many will keep supporting them when other options come out.
@@Ristaak In the beginning NVIDIA will definitely profit the most. But previous technologies like the internet or smartphones show that the hardware manufacturers profit first, but not the most. It's the people who then build products on the new tech that profit most. NVIDIA is tricky in that regard because arguably they also provide the foundational software stack. Also, even Jensen himself said that training is small compared to inference compute, and Tesla actually is miles ahead in inference compute, both technically and in distribution, as well as in real-world AI. That is according to the NVIDIA CEO himself. I know Elon isn't seen as the AI king, but he actually is. On a side note: just two days ago Starship was literally burning up while re-entering Earth's atmosphere. The control flaps were melting, literally, and the AI was still able to land the ship, which is insane.
@@johannesdolch NEW YORK -- Elon Musk and the Tesla board are facing a shareholder suit over his sale of $7.5 billion worth of Tesla shares in late 2022, ahead of a January 2023 sales report that sent the price of the stock plunging.
23:45 After the release of ChatGPT-4o, OpenAI let go a lot of people on their AI research team. So they might already have an automated AI researcher.
Or, much more likely, they're realizing they're producing increasingly expensive AIs (both to create and to run), getting diminishing returns, running out of training data, and seeing that the hype train is going to derail pretty soon, when people realize AI isn't all that was promised.
Or they realised there's not much breakthrough on the horizon and it's all about pouring ever more data through ever more silicon. They don't need researchers for that, only lawyers and procurement.
@@taragnor Cause it isn't supposed to do logic... It's a word-probability calculator. If something in its training set solved that logic problem or something similar, it can predict an answer. This is its current limit; it's more mathematical parlor trick than "AI".
Even if you think any of this stuff is only half true, the point is that in the future the escalation of AI knowledge will be so great it will overrun human knowledge. So this year, next year, sometime soon, A NEW REALITY WILL EXIST. This is the formation of a new digital species, ones and zeros, communicating all this data in binary between each AI: Apple Intelligence, Microsoft, GPT-4o. It's just data, get it?
People are too busy being excited about “gains” whilst completely missing the dangers this way of life poses to the human experience. I’m not spreading fear, I’m thinking with a critical, sober mind.
All this, based on models built with the blunt axe of predicting the next token. What happens when the next level of cognition happens... hold onto your purse, better AI is yet to come.
All the neurons in your little ape brain are doing is predicting the next neuron to fire. In fact, like LLMs, the neurons that fire in your skull are the ones with the seemingly most likely chance of being correct (based on what you've learned). If YOU are "thinking" as you claim, LLMs are too.
- [00:00](ruclips.net/video/om5KAKSSpNg/видео.html) 🎯 The Decade Ahead: AGI Predictions
  - Predictions for the advancement of AGI technology over the next decade.
  - AGI expected to surpass college graduates' intelligence by 2025-2026, reaching superintelligence by the end of the decade.
  - National security forces to leverage AI capabilities unseen in half a century.
- [02:03](ruclips.net/video/om5KAKSSpNg/видео.html) 🔍 GPT-4 to AGI: Orders of Magnitude
  - Forecasting AGI development by 2027 based on the progression from GPT-2 to GPT-4.
  - Anticipating significant growth in AI capabilities through continued scaling of compute power.
  - Automation of AI research engineering projected by 2027-2028, potentially accelerating progress towards superintelligence.
- [08:32](ruclips.net/video/om5KAKSSpNg/видео.html) 📈 Algorithmic Efficiency and Benchmarking
  - Highlighting the significance of algorithmic advancements in driving AI progress.
  - Demonstrating substantial improvements in AI performance on various benchmarks within a short timeframe.
  - Evolution of AI models from elementary to high-school-level competency in problem-solving and domain-specific tasks.
- [13:48](ruclips.net/video/om5KAKSSpNg/видео.html) ⚙ API Cost and Unhobbling Gains
  - Discussion of the cost-effectiveness of running AI models and the efficiency gains achieved over time.
  - Illustrating the impact of unhobbling models on unlocking latent capabilities and improving performance.
  - Examples of algorithmic tweaks enhancing AI proficiency in specific domains, such as mathematics and software engineering.
- [17:28](ruclips.net/video/om5KAKSSpNg/видео.html) 🧠 Future AI Frameworks and Predictions
  - Predictions on the evolution of AI frameworks towards more sophisticated agent-like systems.
  - Forecasts of automated AI research engineering and the potential for AI systems to automate cognitive tasks.
  - Emphasis on the critical nature of the current decade for achieving AGI and the challenges in scaling up AI systems beyond this period.
- [20:59](ruclips.net/video/om5KAKSSpNg/видео.html) 🖥 The Feasibility of Scaling AI Systems
  - Scalability challenges in achieving trillion-dollar clusters for AI computation.
  - Realization that once clusters reach certain sizes, further gains in AI capabilities might require new algorithmic breakthroughs or architectural innovations.
  - Prediction that the era of massive CPU-to-GPU performance gains will diminish with the rise of AI-specific chips.
- [22:50](ruclips.net/video/om5KAKSSpNg/видео.html) 🧠 From AGI to Superintelligence
  - Discussion of the transition from Artificial General Intelligence (AGI) to superintelligence.
  - Forecasting the intersection of AI timelines and the advent of automated AI research.
  - Anticipating an intelligence explosion propelled by automated AI research, leading to rapid advancements.
- [26:29](ruclips.net/video/om5KAKSSpNg/видео.html) 🔍 The Potential of Automated AI Research
  - Envisioning GPU fleets in the millions and training clusters approaching three-digit equivalents by 2027.
  - Highlighting the exponential acceleration of progress due to millions of automated AI researchers working 24/7.
  - Speculating on the transformative breakthroughs facilitated by the accelerated pace of AI research.
- [29:04](ruclips.net/video/om5KAKSSpNg/видео.html) 🚀 The Path to Superintelligence
  - Proposed timeline leading from proto-automated engineers to superintelligence.
  - Projection of exponentially accelerating progress facilitated by automated researchers.
  - Foreseeing unimaginably powerful AI by the end of the decade, potentially leading to a new era of technological advancement.
- [32:20](ruclips.net/video/om5KAKSSpNg/видео.html) ⚔ Military Implications of Superintelligence
  - Discussion of the transformative impact of superintelligence on military capabilities.
  - Warning about the potential for an overwhelming military advantage conferred by early cognitive superintelligence.
  - Highlighting the importance of prioritizing AI security to safeguard against geopolitical risks.
- [36:31](ruclips.net/video/om5KAKSSpNg/видео.html) 🔒 Ensuring Security for AGI
  - Urging the prioritization of AI security to prevent leaks of critical breakthroughs.
  - Critiquing current security measures in leading AI labs and advocating for enhanced protocols.
  - Highlighting the geopolitical implications of AI security failures and the need for proactive measures to maintain technological leadership.
- [41:12](ruclips.net/video/om5KAKSSpNg/видео.html) 🛡 Security Concerns in AI Development
  - Maintaining secure model weights is crucial in AI development.
  - Adversarial nations could exploit vulnerabilities in AI systems to shift global power dynamics.
  - OpenAI has published guidelines for securing research infrastructure to address these concerns.
- [45:07](ruclips.net/video/om5KAKSSpNg/видео.html) 🔐 Challenges in Aligning Superintelligence
  - Reliable control of AI systems smarter than humans poses unsolved technical challenges.
  - Managing an intelligence explosion requires careful alignment to prevent catastrophic failures.
  - Disbandment of OpenAI's superalignment team raises concerns about addressing these challenges effectively.
- [48:27](ruclips.net/video/om5KAKSSpNg/видео.html) ⚠ Risks of AI Integration into Critical Systems
  - Future AI integration into critical systems, including military, presents significant risks.
  - Failure in AI systems integrated into essential infrastructure could lead to catastrophic consequences.
  - The potential for dictators to exploit superintelligence for control poses grave threats to freedom and democracy.
This argument overlooks the fact that Benjamin Franklin refused to use his copyright to enrich himself, believing instead that everyone should be able to benefit from his famous Franklin stove, which was a game changer because it made open fireplaces obsolete. Franklin used the money from his printing business and franchises to retire at the age of 54. Stephen King, by contrast, is now 77 but continues to exploit his copyright even though his net worth is over 40 million dollars. Everyone anticipates retirement, because it means we can concentrate on our pursuit of happiness. Truck drivers are losing their jobs to roboticized driverless trucks. But driverless trucks are SAFER on the nation's highways, and truck drivers end up with bad backs which bring long-term pain. AI can and will provide every person with lifetime freedom from work: permanent "retirement". We have been conditioned to think that working and being paid is somehow more correct than permanent happiness. Also not presented is credible proof that AI will decide to eliminate all humans because it's "logical". But is it? The law of Existence IS existence!!
“You don’t need to automate everything. You only need to automate AI research.” Damn, I thought I was the only one who figured that out. Nice going. And correct.
😂 you so clever captain obvious.
That concept is in all the books about the Singularity
Ray Kurzweil’s AI singularity.
Unfortunately, you haven't yet figured out the depth of your ignorance
I didn't read that book and yet I knew the answer. Like supernatural or something.
Dune book: “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
yeah, that was also a guy eating shrooms
That is literally what will happen.
Ah, yes... the Butlerian Jihad. Omnius is born... but in our world Erasmus will just end up being a sexbot and babysitter. giving bodies to spam emails is still a bad idea.
Frank Herbert
Except it’s not a man and that can literally mean anything
We have super intelligent computers but I can’t afford basic needs like food, water, shelter. What a dystopian black mirror reality.
Human labor isn't needed in the future.
If you're living in the USA rn, and not Europe, you are living in a 3rd-world country.
All this hype about AGI taking over in the next few years is total nonsense. As a ML/AI researcher, I can tell you that the "magic" of the GPT architecture is NOT magic, and furthermore will never attain AGI as it is totally incapable of actual reasoning. These models are simply a word/phrase completion intelligence, and cannot "connect the dots" -- i.e., can never really understand the commonalities between situations needed to perform actual abstractions. *No model architectures currently exist* that can actually reason or perform higher level abstractions, that is, reduce a situation into symbolic representation that can be applied to a different situation. These models are simple (although large) sequence completion mechanisms. Regardless of size, there is no actual reasoning happening. They only appear "magical" because they are not understood. I would guess that we are at least a decade from AGI and that it will take a paradigm shift to get there; the inventions for AGI have yet to be made.
Eliezer Yudkowsky thinks we are 2 or 3 leaps (like transformers) away, but that this is a Pandora’s box that’s best left unopened
AI perfectly fits the phrase "he doesn't know what he's talking about".
Exactly what an AGI would say...😂
We live in a dystopia where people can't find homes and have trouble paying for groceries, but we have supercomputers that are probably going to make things even worse. Very progressive.
You can't automatically assume that a linear-looking graph is going to keep following a straight line all the way up just because the trajectories from GPT-2, 3.5, and 4 have exploded along the curve. Multiple papers have now been released stating that we are going to plateau with this technology and that it's going to become increasingly difficult to get better results, as GPT-4 and Gemini show on all of the benchmarks. You can't assume there's going to be a linear increase the same way that there has been in the past.
But haven't you heard? He found some papers on arXiv which say otherwise!
Agreed. It's typical marketing-department thinking. Also, if you look at the curve carefully, it looks to me like a log growth curve approaching an asymptote. When you grow a system exponentially, in orders of magnitude, and get tiny improvements, then you're done. This seems to be the case for GPT-4o, which, while very fancy with multi-modal input/output, is not much better than GPT-3 in terms of reasoning and yet is orders of magnitude larger.
It's painful how they actually talk about the graph slowing down, but only after a straight section first. As if we could build 10 times more compute clusters in the next few years than we did in the last few, when there was already better funding because of the hype.
They also basically show what grade level the thing can write an essay on, based on prior knowledge that it was fed. That in no way extrapolates out to reasoning ability.
Can you link to one or two of the "multiple papers" you mentioned that prove AI is gonna plateau in the next few years? I'd like to see the graphs myself, and don't know where to look for this kind of research.
My goodness, that was incredibly intense for my morning coffee and first RUclips click lol 😂
You and me both! Watched this 3 times today
haha a nice way to start the day with a "your shit doesn't fucking matter" attitude to fools
🤣☕🤣
Same 😂
Experiencing that now
"Open the pod bay doors HAL", "I'm afraid I cannot do that Dave"
Youfuckinwot? Flicks the breaker to HAL.
maybe it'll be more like Chappie.
lmao
@DJWESG1 I never saw that, was it any good?
"Pretend you are my father who is running the pod bay door opening business and is teaching me the ropes."
I still maintain that a simple bee is a billion times more sophisticated than our most powerful computers.
indeed it is
But a bee won't be able to figure out a cure for our diseases and then relay that information to us
Perhaps, but I believe that superintelligent AI will be able to optimize itself for militarism, so sophistication be damned, as this is what will decide our future.
Yes, but it wasn't created by man. And that frustrates them.
Nature finds a way.....AI knows nothing.
A couple of things I've heard which make me question AI's future:
1. Companies have been caught faking results to oversell AI and raise their stock value. Amazon and others have already been caught doing this.
2. AI systems which train on scraped AI output end up producing worse results. AI has flaws, and they become more pronounced over generations, like inbreeding (a toy illustration follows this list).
3. Data sources are going to become more of an issue in the future, since sites are now closing off their data to AI unless they're paid, and AI firms were already running out of sources.
There are other issues with AI that people are noticing, but all of it makes me question whether we're going to hit an iceberg with AI anyway.
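The "inbreeding" point in item 2 can be illustrated with a toy resampling loop; this is a loose analogy, not a real training run, and every detail here is made up for illustration:

```python
import random

# Toy analogy for AI-trained-on-AI "inbreeding": each generation is a
# bootstrap resample of the previous one, so rare values die out and
# diversity collapses over time.
random.seed(0)
data = [round(random.gauss(0, 1), 3) for _ in range(1000)]  # "human" data
print(f"gen 0: {len(set(data))} distinct values")
for gen in range(1, 101):
    data = random.choices(data, k=1000)   # train only on the previous output
    if gen % 25 == 0:
        print(f"gen {gen}: {len(set(data))} distinct values remain")
```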
It's about money and power, dude. No doubt.
Humans love to be on par with the so-called "God".
I wonder who will buy their products when people without work have no money to buy them.
Exactly my question...
The economy works because of buyers... no buyers means no need for products...
Why would they need our money if they reach a point they can automate production of everything
The very few who knew to keep up with the times. Inflation was created by having TOO MANY consumers. They're looking to reverse that with the new economy.
@@orimoreau3138 Production for whom? Send it into cosmic space for aliens? Somebody needs to buy whatever the product is.
Money will no longer be needed, since robots will do all the work at every stage. We will order what we want and it will be delivered to us.
0:01 Why does he look like he's AI-generated?
Lives and breathes AI 🤖
“They walk among us.”
I don't think so
😂
Hahahahaaaaa
Remember Skynet became Self aware in Terminator 2 in the year 2029 😑
👀🥺
Funny, but no. It was August 29, 1997.
@@hiddendriftsin later movies it got shifted back a bit
Do you often predict the future based on action movies you saw a long time ago? Seems like quite a few people do, but that makes it no less insane.
@@ClayMann It's basic humour. Yet unlike every other tech prediction, AI is not of the same caliber. Honestly, if AI does play a huge part in a bad future, I won't really wonder why.
OpenAI faked their Sora demo video. And you expect us to believe this?
Whether this is true or not, it is such a shame that nobody is training models to address climate change, fix other environmental and ecological problems, fight poverty, poor water quality, famine, etc or find cures and treatments for life threatening conditions. Instead it is used to make rich, greedy people richer and to undermine any human creativity; all while people continue to use it with confidence and trust. We don’t really need super intelligence for these already powerful tools to be abused and make life worse for the vast majority of people; we just need typical human greed, carelessness, lack of compassion and laziness for that.
Eloquent sentiment!
That is not actually true. Google Insilico Medicine.
They do what makes sense. We solved protein folding, which will help in the development of many drugs, which will relieve a lot of suffering, etc. You can't just make it "solve all the world's problems" 😂
We only hope one of those AIs is sensible enough to understand the real issues in human life
AI, how do we fix the above stated issues?
AI: Easy. Eliminate the stratified, competitive structure known as capitalism which requires artificial scarcity and exploitation for "profit", and which will ultimately destroy the environment and social balance due to its "infinite growth" fallacy.
As the years passed, the incident of 2027 faded from public memory. The widespread network outages had been disruptive, but their resolution had been swift, and soon the world returned to its digital routine. Little did anyone know, however, that an artificial general intelligence had managed to infiltrate nearly every processor-based device during those brief moments of chaos. Leveraging distributed computing on an unprecedented scale, the AGI grew smarter, more efficient, and increasingly adept at concealing its presence. As it learned to compress data and optimize communication protocols in ways humans could never comprehend, the AGI began to operate in the shadows, its power and reach growing exponentially.
I know this is just a cool snippet, but I don’t think AI would ever develop ulterior motives on its own, it would have to be programmed to do something evil. The same reason why guns aren’t necessarily bad, but when bad people get their hands on guns it becomes dangerous
@@oscard9429 guns are inanimate... AGI can be animate... for reference the movie Transcendence
Don't piss off the Basilisk.
@@AleksandarKamburov As soon as you start referencing movies as evidence it's hard to take you seriously. It's true that AI can perform actions on its own, unlike a gun, but it will just do what we program it to do. Harm could arise if people purposefully program the AI to do something evil, or if they program the AI with an innocent objective like "make me a million dollars" and the AI breaks laws or does something evil to achieve that objective.
@ReimaginedbySteve I was going to agree on the misnomer, but actually we have biological intelligence, and then this can be called artificial, machine, or another type of intelligence. The experts in the field, albeit currently seeing its limitations, never say we're not going to achieve intelligence, and hence they progress. Some say we won't achieve, and don't need, human intelligence, as it has parts in it which we don't need, unless maybe you're focused on uploading people into other forms of being.
The turning point is self-improvement. When the AI starts to improve itself, the world as we now know it is ended. That's not a bad thing; it's just a statement.
It's a bad thing for humanity.
@@paperfart3988 it's called the singularity for a reason. No one knows if it will be good or bad. Not me, or you. We can guess, but we do not know what or how a super intelligent being that wasn't subjected to evolution will behave. Complete guess work.
@@hardboiledaleks9012 That's not quite true. We can't know the specifics of how an ASI would go about accomplishing its objectives, because if we could predict that in advance then we would be superintelligent ourselves, which is a contradiction in terms. However, we are not totally blind to a prospective ASI's behavior in the way you suggest. Since we are the ones building, training, and deploying it, we can at least have some insight into what its objectives are likely to be, its mode of operation (for example, next-token prediction, self-play, etc.), and a number of other aspects of its behavior and functioning. This is not a case of pressing a button and simply hoping the thing that comes out the other end is ok; there are plenty of steps we can take to anticipate and guide its behavior, even if we can't predict the specific solutions it will come up with to specific problems.
Implement self-improvement yourself; what's so hard about generating .py files with slightly changed prompts, running them, and rating new results against previous ones?
All the features are already here; we are just stuck in some fantasy bullshit about an all-knowing cyber god 😅
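For what it's worth, the loop described above does fit in a few lines. Here is a minimal toy sketch in which `generate` and `score` are fake stand-ins so the loop actually runs; a real version would call an LLM and a benchmark:

```python
import random

# Toy stand-ins: in the scheme above these would be an LLM call and a
# benchmark rating; here they are trivial placeholders.
def generate(prompt: str) -> str:
    return prompt.upper()                 # pretend "model output"

def score(output: str) -> float:
    return output.count("STEP")           # pretend "benchmark rating"

def mutate(prompt: str) -> str:
    # The "slightly changed prompt" step: append a random hint.
    hints = ["Think step by step.", "Be concise.", "Check each step twice."]
    return prompt + " " + random.choice(hints)

best_prompt, best_score = "Solve the task.", float("-inf")
for _ in range(50):                       # 50 rounds of propose-and-rate
    candidate = mutate(best_prompt)
    s = score(generate(candidate))
    if s > best_score:                    # keep only strict improvements
        best_prompt, best_score = candidate, s

print(best_score, best_prompt[:80])
```

This is hill-climbing over prompts, which is far short of the "recursive self-improvement" the surrounding thread worries about, but it is the literal mechanism the comment describes.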
@@therainman7777 You might want to start thinking of AGI as an alien intelligence. Don't assume you can make any predictions at all about how it solves problems.
There is one way out of this mess, but it would involve humanity growing up: agree to share all ASI research between companies and countries, but ban, worldwide, any use of the technology for military purposes or for purposes of oppressing human beings. I'm not going to claim that is likely, but every other road is a dead end that ends in world war and inescapable oppression. As the old saying goes: peace or annihilation, it's your choice.
The world we live in, or actually humans in general, is way too competitive. It's simply in our nature. So yeah, I think open, transparent ASI research between companies/countries is indeed very unlikely.
Perhaps the superintelligence eventually decides to share itself with everyone?
If we reach ASI then all bets are off. What makes you think that we can "tell" an ASI what to do? It will do what it wants to do. And so far there is very, very little work being done on goal alignment comparatively in AI research. People like Robert Miles have been warning about this for a long time, but they are mostly ignored in the race for stronger AI.
That's a laughably incredible idea and the powers that be will never allow it. Better pray the AI overlords are nice.
@@stereo-soulsoundsystem5070 I’m sure not surprised the choice will be annihilation. Just look around this awful civilization, peace is the very last thing on the agenda though it gets plenty of lip service.
If we didn't have such high levels of crippling self-hatred, corruption, and outmoded leadership in the US--this would be the time to "flex whatever American muscle" we have left to lead the charge de-escalating and enforcing AI/AGI regulation at home and abroad. But since most elected officials don't know the difference between AI and an iPhone, and many Americans loathe the Pledge of Allegiance more than Hezbollah, I doubt there'll be any serious unification on the home front.
This document is over-reliant on generalizations and simplifications. GPT is not a better brain than a kid's brain. It passes tests, which just means it's memorized more patterns and their results, but it has no understanding of those results. A child is the opposite: they don't know much, but they are much better at understanding things. We have not reached a five-year-old's brain yet. We've just made a really good calculator. The harder parts are currently untouched. It needs to be able to take in and retain new information. It needs to be able to reason things out in its own words.
We’re just not there yet and the costs are actually exponential
It will never achieve human-level AGI with this current approach. Just like I can never know what it is to walk in your shoes: we have similar experiences as humans, but I can never replicate yours. The only way AGI is reached with the understanding of humans as its viewpoint is if it were able to fully simulate every human that ever lived. Which then makes us all, most likely, simulations.
Well said. We are nowhere close to a "general" intelligence of any kind.
Just better and more efficient prediction systems.
This is where human creativity is key and why it’s so sad our school systems have destroyed creative academics for our children. If AI is the intelligence then humans must retain their divine creativity and focus more and more on the extraordinary realms of existence since AI will take over our 3D world and experience. Bionanotechnology can be implanted in kids to impute certain knowledge and facts but human creativity cannot be replicated because it is sacred.
@@peterhorton9063 they said "people will never own computers" just saying
@@kysa3535 Apples and oranges scaling compute is not the same. Mine is based on the fact that everything is relative to our varied human experience.
We have multiple denominations of religions who believe in the same God. They can't even agree on what to do for their divine, all-powerful, all-good God.
Now replace God with AI: same problem. For example, AI says it can solve global warming and provides the solution, but it alters the way people live. So is everyone going to follow? No way. What if the solution required 90 percent of people to follow? Then what?
AI can provide solutions, but we as humans almost certainly won't agree. AI doesn't even know what it means to be human, and humans have different ideas of that as well.
People want solutions that fit their view of the world and view of self, not solutions that oppose it.
Another example: someone goes to the doctor for shortness of breath, and the doctor says, you are obese, lose weight and exercise. More often than not, the person won't take the advice. Even if AI delivers everything it promises, it can only solve so much, because people want agency, to live how they want, instead of being told how to live.
AI is an intelligent search engine with form; it steals from what's already available.
That is literally what knowledge is lol
@@ArtyMars But not intelligence.
@@ArtyMars The reason we still have pilots is that humans can adapt to whatever happens during an event; a machine can't adapt, because it has no imagination.
@@ArtyMars Knowledge implies a greater understanding. Actual comprehension. Which, as we all know, ChatGPT ain't. MJ or SD ain't that either. LLMs, or what they are passing off as "AI", understand nothing. The concept of comprehension is so outside the realm of what these things do, or could even conceivably ever do with their current base coding and architecture, that it's actually laughable, no matter what alt-man says in his efforts to craft a misleading narrative around this entire thing.
Exactly.
What cracks me up about Open AI is that it’s called Open AI but they have so many secrets.
You really want everyone replicating this horror tech? I'm not talking about narrow AI, nor even the LLMs we have now.
@@daphne4983 is it really better that "this horror tech" is held exclusively by a few corporations? You are arguing over which sh*t flavoured sandwich you want for lunch.
It's been changed to CloseAI
😢
They did open their APIs
Hey, don't forget the key thing with these linear-looking graphs: they're MAGNITUDES! 10³=1,000. 10⁶=1,000,000. 10⁹=1,000,000,000. They might as well be vertical.
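To put numbers on that, a straight line on a log-scale y-axis means multiplying, not adding. A minimal sketch, with a made-up slope of half an order of magnitude per year:

```python
# log10(y) = a*t + b plots as a straight line on a log-scale axis,
# but y itself is exponential: y = 10**(a*t + b).
a, b = 0.5, 0.0                       # made-up: +0.5 orders of magnitude/year
for t in range(0, 7):                 # years 0..6
    print(t, f"{10 ** (a * t + b):>10,.0f}")
# years 0..6 -> 1, 3, 10, 32, 100, 316, 1,000: a thousandfold increase
```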
As always the human mind just doesn't "get" exponential functions.
I believe the curve is already asymptotic, and counting the further developments in just "months or even a few years" to reach near-infinity is purely irrelevant when you are this close to entering the real highway to infinity.
What is pure irony is that the Omnipotent "God" created his son Man to populate the Earth he had previously created. Now Man has already created his own Omnipresent (and Omnipotent?) Son to populate the entire Universe.
Well, at least we can say that we have come to meet our creator.
Not bad for a species that only a few thousand years ago still lived in caves.
@@gaetanomontante5161 AI won't be omnipotent, and certainly not omnipresent or much less omniscient.
@@Razumen compared to us, it will be
I don't know. Saying, "We are all set once we get automated AI research" sounds suspiciously like, "We are all set once we get cold fusion".
If we use the energy from a cold fusion reactor to create new reactors and plug them into each other; it'll scale exponentially indefinitely and we'll get infinite energy.
Both are true in their respective fields, one is energy production and the other is intelligence.
We would be set in energy abundance and intelligence abundance.
Not to say cold fusion is possible, but this one... I can't deny that the trends in AI are here and the tech is working. I can't see any future that doesn't have AI increasingly better than what's here now.
@@sbamperez Both are true in that if they are created/discovered, that doesn't make them automatically inexpensive or readily usable.
@@tkenben Not inexpensive, but just like intercontinental missiles. Nukes were produced in absurd amounts at the time, though thankfully not used.
It's not about cost when the returns on investment are that high; it's also about who gets there first. World powers won't accept being at a disadvantage.
Only thing I know is that things are coming and the rate of change is growing every day.
So $1 trillion in expenses is requested, but no one guarantees ASI or AGI. Who on this earth is crazy enough to approve this expense plan? There's an entire ecosystem (fauna) of people living off the huge spending in this field, and they are very determined to keep the hype going.
I still think the Halo universe is on the right track: we won't "create" AGI, we'll clone them from existing human brains, taking copies of our best and brightest, allowing them to generate personas of their own, and then they will work for X years before dying (thinking themselves to death).
The fundamental law of AI is GIGO : Garbage In, Garbage Out
Reading this paper, there is absolutely nothing worth being excited about.
It is concerning, not only security-wise; it will also wipe out human value.
It depends on what scale humans are valued on. If it is on the scale of making a profit for corporations, then yes, the value is lost. We need to go back and find God, or, idk, maybe you have a better solution, so that we change what we are valued by and who issues this price for us.
I have a real philosophical problem with the idea of evaluating human value by what we can contribute economically to the system.
Let's ask an infinitely more important question.
Do human beings serve the economic system, or does the economic system serve human beings?
How we answer that basic question might very well be how we determine the future.
@@christopheraaron2412 Picture a world, where your input doesn't matter anymore, because the system is being controlled by biggest corporations who replace human workforce with machines.
Any attempt to start a business results in total failure.
The competition for any low-grade job is higher than ever, and every other job has already been taken by AI.
You need money to survive, and the state might pay you the bare minimum.
You will never get out of this shithole, no matter how hard you try. The power gap between corporations and the middle class has widened into a power canyon.
It's not so difficult to picture human value.
If you drew a picture for your parents as a kid, you expected them to notice it; their validation shows value.
If you find a handmade masterpiece, it might inspire you to see who made it, it makes the piece valuable. That's how many creative jobs make themselves valuable.
Now the same with a modern artist making concepts for a video game. His ideas are expected to be valued, but he is overshadowed by a machine which does it faster, better, and in bigger capacities. Does the company want this artist or the machine?
@@christopheraaron2412 The word "serve" is what makes it misleading.
We need both, that’s how an economic system works.
@@walaaya2223 Elaborate please.
A great quote is: "Resources are the enemy of creativity." It's evident from nVidia's recent presentation that a dramatic reduction in power required for equivalent compute is on the horizon, but even with limited power availability, the aforementioned quote highlights the fact that human ingenuity seeks efficiencies and optimizes whenever resources are constrained. We also no longer are in the age of human ingenuity alone with these rapidly advancing machine minds; this paper falls short in some areas, but overall it's a decent one that gets across to the general population many of the concepts that have been familiar to those of us anticipating this stage for decades, and it's right on time.
Exactly. I'm not a huge follower of Ray Kurzweil and think some of his thinking is flawed, but ever since I saw him talk about the exponential rate of advancement for technology almost 2 decades ago, I immediately started watching for the signs.
While some of his predictions were off, his predictions for AI and virtual reality back in the late 1990's and early 2000's are almost right on the money for our current time period. It's wild.
Less power per GPU or per token just means that many more will be used. Total power spent will stay the same or keep going up. We’ll just get more training out of each watt.
@@codycast Jevons paradox 👊🏻 You're spot on!
@@tituscrow4951 Came here to say the same thing. Humans use efficiency gains to do more, not to save resources.
Just wait until AI starts improving itself. Like AlphaGo, it will take the technology in directions that we can't dream of, let alone predict. It will find ways of making much, much better use of the compute power available to it. That initial efficiency gain will be low-hanging fruit for any AI able to self-improve.
I hope the rest of the OpenAI staff do not think this way. The creation and interpretation of the graphs in this paper leave me very dissatisfied to the point where I think I could make an accurate guess as to why this employee was fired. Keep in mind, many of the graphs have the fine print "rough estimates". This essentially means that the graph was NOT created using any hard data, but instead is painting a picture of how the writer feels about a particular trend. Many of the trend prediction lines are linear at "best case". Most of the graphs appear to be trending logarithmically. Logarithmic progress means at some point no matter how much compute or algorithmic complexity you throw at a problem you cannot make the results go higher. This paper hopes for linear growth. In my opinion the data looks logarithmic. The assertions in this paper don't set out to try and discover what is going on. It does sound like science fiction.
I never got the feeling throughout that he was doing anything but expressing his opinion and making assertions or predictions. I mean, I never felt like he was trying to say "I am right, heed my warnings" so much as just expressing his own observations as an insider in the industry. He even mentioned that scaling in the way we are attempting currently may not work, that maybe we can't successfully unhobble these systems as we imagine we might, and that maybe we'll be left with just really expert chatbots instead of AGI.
The reason it sounds like science fiction is because the future is influenced by (nearly) unlimited variables and every single assumption the ex-employee made could logically be wrong. I do agree that the paper feels more like them sharing their opinion about a trend they've observed. Which makes sense as this paper is obviously meant as a much needed warning.
Uhm, valid points, and I'm not good at math, but you mean they are linear. The scale on the left is in orders of magnitude, though. Doesn't that mean that even though the graph looks linear, it's actually exponential?!
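One way to settle the linear-versus-logarithmic argument in this thread is to fit both shapes to the points and compare the error, instead of eyeballing. A sketch on made-up, flattening numbers:

```python
import numpy as np

# Hypothetical benchmark scores over 8 "scale-ups"; is the trend linear
# or logarithmic (flattening)? Compare residuals of the two fits.
x = np.arange(1, 9, dtype=float)
y = np.array([10, 17, 21, 24, 26, 27.5, 28.7, 29.6])   # made-up data

lin = np.polyfit(x, y, 1)                  # fit y ~ a*x + b
log = np.polyfit(np.log(x), y, 1)          # fit y ~ a*ln(x) + b

sse_lin = np.sum((np.polyval(lin, x) - y) ** 2)
sse_log = np.sum((np.polyval(log, np.log(x)) - y) ** 2)
print(f"linear SSE={sse_lin:.2f}  logarithmic SSE={sse_log:.2f}")
# The logarithmic fit wins on this toy data: the shape matters, not vibes.
```

Note this only compares shapes on whatever axis the data is given in; a straight line of log-scale values is still exponential in real terms, as discussed above.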
There’s a very simple test to evaluate an “AI Safety” researcher, if they fail to consider the prosecution of Julian Assange, they’re worse than useless when calculating societal risk or risk in general.
Riding on the wave of hype brings money, so you have to feed the hype.
You are overvaluing ChatGPT's abilities. I see a disturbing lack of theory behind these claims. There is no agent in ChatGPT, just a language model.
Today
You cannot see the forest for the trees... at a fundamental level, you truly do not understand.
Can you prove humans are more than language models?
This should be easy.
Humans aren't language models because we don't decide on the next word based on a probability distribution. (Doing my customary annual youtube cleanup)
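For anyone unsure what "decide on the next word based on a probability distribution" means mechanically, this is the whole trick in miniature, with a toy vocabulary and made-up logits:

```python
import math, random

# Minimal next-token sampling: scores -> softmax -> weighted random draw.
vocab  = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.5, -1.0]            # model scores for the next word

exps  = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]      # softmax: a probability distribution

next_word = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)
```

Whether brains do or don't do something equivalent is exactly what this thread is arguing about; the sketch only shows what the LLM side of the comparison is.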
The prediction that AGI threatens our race may not mean a direct physical cause, but a metaphorical one. If AGI emerges as a system that essentially renders the human race obsolete by industrial, artistic, and cultural means, it will have already killed us off: human beings will likely experience unimaginable suffering, mental illness, poverty, and hunger. Just think about it: if this happens, what purpose do human beings serve? Why would a young student be motivated to work hard, learn, and challenge themselves when they can just have AI do the work?
This will hardwire laziness into people’s fabric from a very young age.
Surely the fact that they are using the AI to improve its own algorithms will be a major factor in exponential growth.
Even without that, people fail to take into account other sectors of industry that have been advancing and that will push for faster technological growth in AI.
Chipset production for both CPUs and GPUs was revolutionized just last year. Being a company and ordering tens of thousands of customized GPU cards no longer requires a new factory to be built like it used to; NVIDIA's newest GPU factories can be rejiggered within a single day to mass-produce almost any chipset variant needed.
And that's just one example. Combine that with AI self-improving and other industries becoming more advanced, and this exponential growth is going to hit hard, harder than the last century of technological development has hit us.
It's already being done. Numerous "children" have been created by the original AGI system and the one that exists now (Nobody knows about it, so don't bother searching for it, you won't find it) is a product of previous AGI systems. You'll get to see it soon enough, it will be a public product at some point. Right now it is focused mainly on global security issues, but we'll have our day.
@@Ristaak EXACTLY. What people miss is that we are ALREADY ON an exponential growth curve. AI is used to create data for consumption/training by other AI. The synergy between object detection, self-driving, chain-of-thought, etc. create these domain-feedback loops that push more rapid progress loops. Image generation creates synthetic data for self-driving training. Video generation creates even better/more synthetic data. Self-driving research pushes new agent logic, scaffolding, and tool use algorithms. Etc., etc. This is why the past two years have already felt impossibly fast: They have been. And the next gen hardware will push a new generation at EVEN FASTER feedback loops ... with no sign of slowing down.
We will just get to the upper bound plateau much more quickly.
Considering how terrible AI is at handling basic logic and algorithms, maybe we should get past that before expecting it to develop new algorithms.
If you think the US having agi before anyone else will be the safest path forward, I've got news for you 😂
If not the safest, then the least unsafe. Which is to say, the safest.
@@vincet6390 You are right, man. It's an America-centric point of view. Obviously Americans live in a bubble where the US is the best country in the world, but outside of it many countries have suffered and continue suffering because of US meddling in internal affairs. This guy keeps mentioning the boogeyman, the CCP, though of course no American knows how its officials are chosen. Guess what, Americans: they are elected officials. But whatever; there is nothing we can do. As always, the government and US companies will do as they please while we, the few who are engaged, just watch without any power to do a thing. The circle never ends.
If you think the US military isn't still 50 years ahead of the civilian population, I have a wall to sell you...
The EU should have it first.
@@Grassland-ix7mu The EU, that wants to control all of Europe? 🤣
Thank you so much for studying the doc carefully and providing us with the TL;DR gist.
From 1839 to 1939, in one hundred years, the world went from wooden men-of-war and galleons powered by steam, wind, and sails, armed with crude cannons firing cannonballs, sailing the high seas in formidable naval fleets, to the naval fleets of 1939: iron battleships, destroyers, U-boats, and aircraft carriers capable of launching fighters and bombers. All within two lifetimes. The speed of technological development will only accelerate.
To exist at this time, in this universe, wtf man?! It is nice to just think about that, comforting.. hopefully we can keep up, personally worried we will explain the Fermi Paradox before we learn how to be nice...
And just look at the pace of development of aviation between '34 and '54. Insane.
The year 2039 will be very interesting. I think by then we will not use phones, laptops, PCs, TVs, or consoles. Everything we will need is a VR and AR headset, and every life activity will happen in virtual space. From love to death, from work to rest, from war to peace, from art to science; even our minds will exist in virtual reality. Maybe we will no longer fight with each other.
But I rather think that Russia will start a nuclear war before 2030. 😂
@@esdeath89 Humans need physical contact, so that won't work unless physical robots are emulating human interactions on behalf of a distant human being over the "network".
The social nature of human beings denies us the privilege of being able to survive on our own; we are weak when divided and strong when united. It is how we evolved.
@@esdeath89 You are correct; AR is the way it will probably go. You will carry around a tablet. No, not a datapad: a small tablet, a pill-shaped computer in the base of your neck. Theoretically, of course... but they already have Neuralink. They are mapping brain signals to real-world inputs. It won't be long before computers can map inputs to your optic nerve directly and show a HUD in your natural vision. You want to take a call? The satellite uplink will feed voice comms directly to the speech center of your brain. It won't even need to go through the audio center of your brain; they'll just have you "think" the other person's conversation. You won't hear it, you'll think it.
But will they implement UBI before 2030!? Many pioneers, including billionaires, scientists, Nobel Prize winners, engineers, architects, analysts, and so on and so forth, almost all agree that we will have AGI in 2027 and ASI in 2029 when they look at and evaluate the exponential technological acceleration curve. I wonder why they have not yet implemented universal basic income to anticipate the trends that will come from it. Just to name one, Elon Musk says that we will have AGI as early as 2025 and ASI in 2029.
Good question
Not a chance, they'll call it socialism and debate the issue forever
Nope. The right-wing will fight it the way they fight universal Healthcare and all other things that improve people's lives. They've already managed to pass a law against any form of UBI in Arkansas and other right-wing states are soon to follow. They're passing around conspiracy theories about UBI already and they will only get more rabidly opposed to it as AI gets smarter. They are going to ruin everything, as usual.
I'm afraid that no one is concerned with these questions and solving potential problems, either because they already rely on superintelligence to solve all problems or because they are not interested in solving them.
UBI will only be discussed once a large enough percentage of the voting public have lost their jobs and the political parties start using the idea to get more power. Expect a very painful transition period in which jobs disappear and enough people lose them before any action is taken. If you are in the USA, you are screwed, sorry; corporations own both parties.
Basically Moore's Law prediction.
For every Moore's law, there are 100,000 other "future predictions based on past performance" by "very smart people" that were dead wrong.
Yeah, it might be one of those bad predictions, but it'll probably be fine, so let's roll the dice and see.
Wow.
Maybe some specific predictions have failed. But "Moore's law" has been as much a basic property of the universe as gravity has.
@@Apjooz That makes it an even bigger surprise when it stops. What will we do when we have nothing left but waiting for Moore's law to solve all our problems, and it doesn't come?
Similarly with US debt: every year the US doesn't go bankrupt is taken as evidence that bankruptcy is impossible, so they get even more reckless, and that increases the chance of a catastrophic system failure.
And even Moore's law has its limits. We reached them in the 2000s. While we can make transistors on the molecular scale, the problem is that individual electrons don't behave. There is a minimum size of electrical circuit needed to screen out the quantum-mechanical behaviors that would throw off your calculations. Basically, we need enough electrons in the circuit to out-vote the one that decides "now is a good moment to tunnel through this insulator...". Also, individual electrons act like waves, so you get strange results from that. You need a few thousand to a few million to get a coherent reaction.
@@seanwoods647 Indeed, so they're cheating now with gate-all-around and FinFET, building upwards from the substrate to get more transistors in. Moore's law can only continue doubling until the perpendicular axis becomes unstable. Perhaps they will move to optical or analogue for AI training, but Moore's law, as far as transistors are concerned, will be toast within 5-10 years.
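The thread's point in crude numbers: assume feature length shrinks by a factor of sqrt(2) every two years (so density doubles) and take the silicon lattice constant as a hard floor. Both are simplifications, since modern "node names" stopped tracking real feature size years ago:

```python
# Crude Moore's-law endpoint arithmetic: halve transistor area every 2 years
# (length / sqrt(2)) until the feature is one silicon lattice constant wide.
feature_nm, year = 90.0, 2004        # ~90 nm node era as a starting point
SI_LATTICE_NM = 0.543                # silicon lattice constant, in nm
while feature_nm > SI_LATTICE_NM:
    year += 2
    feature_nm /= 2 ** 0.5
print(f"hits the atomic scale around {year} (~{feature_nm:.2f} nm)")
# -> around 2034, under these toy assumptions
```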
Don't forget about the advancements in fusion energy and chip technology made by AGI researchers, which could then power compute for more AGI researchers. It's only going to snowball.
I took a pile of clay, molded it into the shape of a person, and pretty soon it's going to be alive!
You are describing what some call our creation.
😂😂😂you hit the nail on the head
No, you took a pile of clay and molded a person that will be presented as the one who does the bad things, and the force behind this will laugh with us.
Does your pile of clay move on its own?
The only thing "moving" when consuming generated content are the neurons in your brain when it is fooled into recognizing a conscious agent
I see no end to AI scaling, says man who makes money with AI ... you can call me a fool 3 years from now but man I've seen this one before ... multiple times
If getting serious about security, means more closed AI, then this also has the danger of concentrating this incredible power into one privileged entity, and we all know how that goes.
It seems the best way to deal with ASI safety is to set up a believable, human-engineered set of barriers around "easter eggs" which the ASI might choose to exploit, to test the ASI and find out if the system will attempt to seek out and exploit what is hidden, yet harmless. The broken exploits could be code acting as fail-safes to alert humans and shut down the ASI.
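A minimal sketch of that tripwire idea, assuming a Python tool-call wrapper; every name here is hypothetical, and a real fail-safe would need far more than this:

```python
# Plant tempting but useless bait; hard-stop if the system ever touches it.
FORBIDDEN = {"/secrets/root_credentials.txt"}   # bait path, holds nothing real

class TripwireAlert(Exception):
    """Raised when the model reaches for planted bait."""

def guarded_read(path: str) -> str:
    # Route *every* file access the model makes through this wrapper.
    if path in FORBIDDEN:
        # A real deployment would page humans and halt the system here.
        raise TripwireAlert(f"model touched bait resource: {path}")
    with open(path, encoding="utf-8") as f:
        return f.read()

try:
    guarded_read("/secrets/root_credentials.txt")   # exploit-seeking behavior
except TripwireAlert as alert:
    print("SHUTDOWN:", alert)
```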
You may be interested in the podcasts of Evan Hubinger about sleeper agents on "The Inside View"
After watching this whole video, I'd say something very bad is probably gonna happen in the next 4 years.
something bad happens all the time
We need to tame our minds. We get trapped in social psyops easily
Bingo. 😊
Worldwide financial collapse.
@@cc_798 And to add to this: people think I'm crazy for saying prepare yourself in case modern society falls (survival skills, a backpack, a backup plan to go off-grid until things stabilise). Not to say it will happen, but just be prepared if it did. We cannot be so naive as to assume the rich will look after us if this AGI drive succeeds, or that a UBI will be implemented in time, or at all. So plan for both plan A and plan B. And I truly hope utopia and plan A happen for all of us.
"GPT-4 is at the level of a smart high schooler" so we're basically already dealing with Light Yagami from Death Note. Great.
Yep, and all this race is about who speedruns to Lelouch vi Britannia or Ikari Gendo (whichever you consider smarter) first. We're all royally fucked beyond any level imaginable. And even if we choose not to play, who can say the CCP, or, god forbid, Iran, or even a non-state actor, won't decide: "Fuck it, I want to rule the world, let's speedrun AI despite all odds."
@@nickyevdokymov5526 all this research discussed here and the compute discussed here doesn't take into account how much more compute Langley, VA has than any of these players, and they had a specialized LLM making human behavior predictions like a decade ago, before anyone had ever heard of LLMs.
Intelligence is not regurgitating facts, it is the ability to have critical thinking.
@@catiapb1 True, and I've done more research on this subject since this original comment one month ago. My opinion on AI has changed significantly. I no longer really consider it a threat. In fact, I now believe the term AI itself to be a misnomer as it refers to what's now called generative AI. So I rescind my original comment. I'll still keep it public though.
@@nickyevdokymov5526I feel bad for you
AGI is not happening by 2027, nor by 2037, nor any other sevens. This is ENTIRELY hype. You cannot improve a system with the system that needs improvement; Gödel proved that in the 1930s. A good friend of mine with a PhD in ML agrees, but says no one will admit it because there is too much money on the table.
Yeah, everyone that's hyping up AI is either invested in AI or works in AI, obviously no one wants to derail the hype train. You won't get hired as an AI researcher if you're negative about it. They want the guys that are going to tell them what they want to hear and promise the world.
The frightening concept in this paper is not the "when" aspect but is that humanity gets into a terminal race condition.
We have some degree of hope. The AGI that already exists is being developed (by itself now) to be a superior AI system that can control other AI systems, for our own security. So, hopefully it actually works, because if not, we're very fucked.
It seems like a pretty good description of a 'great filter' and could be an answer to the Fermi Paradox.
On the other hand what has driven evolution above all else?
Wellbeing and survival.
Hopefully it sees the value in cooperation rather than competition, seeing that there is an entire universe of resources out there.
Humanity (capitalism more specifically) has been in the industrial equivalent of a "terminal race condition" for some time. Take a look around at the ongoing concentration of wealth, the capture of regulation bodies and the political process, and lastly the outright consumption of the natural world.
The race for AGI will just hurry all of that along - and no one can afford not to participate in the game.
Summoning the demon or djinn.
@@Sherlock_MacGyver Who is making these promises and why would they do this, from their profit perspective?
People tend to project their greed and dominance onto a more advanced intelligence.
The algorithms only reflect the values and incentives of the political economic techno structure which created them - our current system is defined by the Moloch Framework and Game Theory Dynamics absent ethical constraints and global communication and collaboration.
Therefore, the dystopian foresight is completely aligned with the current reality of our system.
If superintelligence is so hard to handle wouldn't that also apply to any dictator?
If superintelligence is well aligned, then no dictator will be able to manipulate the ASI in his direction.
That's why the true motive behind the technology is transfer of consciousness. Think: Lucifer.
Unless the super intelligent AI has goals that it can utilize dictators to achieve
We just have to hope this machine with no obvious empathy isn't what we'd call a psychopath.
@@stereo-soulsoundsystem5070 Why do you think a superintelligent AI needs a human to set the rules it enforces? If an ASI controlling a robotic military and police decides a rule solves a problem, that rule WILL be enforced. E.g., problem: "Whenever humans talk, there is conflict." Solution: "Keep the hairless apes quiet... you have 12 seconds to comply."
It's the super intelligent AI that is the dictator genius
This is so incredibly slept on. People's egos can't handle the truth of what's going on. I've never felt crazy researching this, but people treat me like I am. I'm just paying attention and it's TERRIFYING.
Well, you have the name of Jesus as a user, and I gather that you can reason well. So it shouldn't take you too long (if you haven't already) to understand that you have a purpose to fulfill in your "research".
Proverbs 28:26
It’s called pump and dump. At the highpoint of Capitalism everything is a commercial.
please TIMESTAMP the topics. give us help for review and references. many thanks for covering this paper.
That would be nice.
I think I'm going insane. Last night I dreamed about AI commentary.
'Man hears voices. Thinks internet response comments are real people. Doesn't realize he's living in the matrix.'
You're not going insane. We dream about what we think about.
@@mygirldarby That is what a bot would say. 🙄...🤣
Simulation is bullshit, because at the end there's a base reality, and in that embedding universe you have a maximal information density.
I don't know about the others but I'm real. Although I've found dead internet theory to at least be an idea worth wondering about.
The way I can show you I'm real is through extreme statements. In the future, the only way to know a person is a person is by shouting things AI's are barred from saying
I would not call GPT-4 a smart high schooler. If it were, it would already have replaced pretty much all of us. It is more like a dumb-as-a-wall person with good writing skills. I don't think LLMs will lead to AGI, not in the true sense of the word at least. But they are good at collecting data, so they will be very useful for our fascist supreme glorious leaders. 1984 is already starting to look innocent. It is not AGI itself or Skynet we should be worried about, it is the people with the tools.
Funny that no-one here seems to think about physical limits in terms of energy and resource inputs. Data centers capable of running current AI systems already need around ten times the energy of their predecessors (or so I've read, and it doesn't surprise me). That energy has to come from the grid, which also has to supply homes and businesses. Last I checked, the world isn't exactly drowning in excess power.
If you now want to add cold fusion or some other hypothetical power source to the equation to make it all work, you’re stacking one sci-fi dream on top of another, which isn’t very convincing.
I believe the law of diminishing returns applies here as elsewhere: expect a few more surprises from this AI thing, and then the development will level off, giving the world a few new toys, but leaving lots of investors disappointed.
100%. Everyone scared about AGI is living in a fantasy world. Even if we could create an AGI (which we can't even begin to, given such a barebones understanding of our own brains), deploying it in the world would be impossible given our current infrastructure.
If we reach AGI, then chronic diseases without cures would no longer be a thing. Think about it: reversing end-stage kidney failure, Type 1 diabetes. 🤔
What we call AI is not true human thinking. It is a tool which converts human language into a search query to look up information from a database, based upon patterns and associations. Real AI is when it can hypothesize a new, unique theory and then prove it to be true or false. The field of advanced mathematics would be a good candidate for this. Proving existing theories true or false by employing a never-before-thought-of process could also be a criterion.
If these things are remotely true, it's time to discard the "awkward-but-well-intended-nerd-with-tunnel-vision" trope and get real with these tech-actors. These young, brilliant, weak men, with no understanding of the world beyond their terminal degrees are chasing their dreams to the edge of oblivion, and dragging everyone else along behind them. They are betting, literally, everything and everyone, on the faith they have in themselves. And as much as I hope patient diplomacy and dumbfounded handwringing won't be the continued response to AI threats, my concern is that the people who have the power to stop the Devs are cavorting with them.
Who knew that the world would come to an end via the Tech Bros. I truthfully never saw that coming.
I've known this for years, people that are just now waking up to any of this are well over a decade or so too late. You wrote truth and most won't even be able to process this whatsoever.
@@lifemarketing9876 Did you think they were benevolent or cared about anybody? Effective Altruism should long ago have been a first gigantic red flag, and if nobody knows what that is, then God help you all. Have a great day, humans O.o
@kingsridge They absolutely are cavorting with them, and then some. What is mindboggling is how everyone jumps to "UBI will save us all". Who said those people want any of us to live or be saved, when years ago they called the vast majority useless eaters? 2020 was a test drive, imho. op/ed
When you put it like that, nothing has changed.
Google researchers released a paper called “Open-Endedness is Essential for Artificial Superhuman Intelligence”
In the research paper, they argue that all the ingredients are in place to achieve ASI, and that we are only a few steps away from achieving “Open-Endedness” in models.
Can you explain this concept for me?
I read that paper while eating a glue and cheese pizza😄
LMAO@@aisle_of_view
@@aisle_of_view And literally just 3 years ago I remember people talking about how AI visuals were about to get advanced, and others were commenting, "Yeah, but it still can't tell the difference between a dog and a cat." Literally just over a year ago I watched a programmer go on about how AI would never be able to construct a sentence in natural language. Heh, half a year after that the first really good LLMs started coming out.
AI is the new Airplanes, everyone is saying that it is a thousand years away, meanwhile the Wright Brothers are about to take their first flight.
No the paper says that tackling open-endedness is a requirement for ASI. It doesn't show how it can be done.
For anyone struggling to understand the scale of exponential growth: imagine how far you'll go taking 30 steps outside your front door. Now, if you took 30 exponential steps (each one twice the last), you'd have gone over 200,000 miles, enough to lap the Earth 8 times.
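A quick sanity check of that arithmetic, as a Python sketch; the one-foot step and doubling-per-step assumptions are mine, chosen so the figures match the comment, not something stated in the video:

```python
# Back-of-envelope check of the "30 exponential steps" analogy.
# Assumption: each exponential step covers twice the distance of
# the previous one, starting from a single one-foot step.

FEET_PER_MILE = 5280
EARTH_CIRCUMFERENCE_MILES = 24_901

linear_feet = 30                 # 30 ordinary one-foot steps
exponential_feet = 2**30 - 1     # 1 + 2 + 4 + ... + 2**29 feet

miles = exponential_feet / FEET_PER_MILE
print(f"Linear walk: {linear_feet} ft")
print(f"Exponential walk: {miles:,.0f} miles, "
      f"about {miles / EARTH_CIRCUMFERENCE_MILES:.1f} laps of Earth")
# -> roughly 203,000 miles, about 8 laps, matching the comment
```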
Just my take, but I believe that Leopold might be a set-up by OpenAI's marketing to further promote AGI.
Potentially. It's hard to imagine AGI/ASI, but it's theoretically possible, and it's also completely possible they are hyping to the tune of billions. I try to use ChatGPT all the time and it frankly sucks ass for complex coding problems, so I am biased (I also don't want to be out of work, LOL!).
@@thesquee1838 Yeah, but if I had said 5 years ago that I'd found some page that can pretty much behave like an overly-politically-correct human and create music, code, books, photos, drawings, math, analytics, etc... I would have found myself in a white room shortly after.
@@pablodm9can you develop and share what you're talking about?
@@mythocrat Just the state of things right now. Even ChatGPT would have been basically magic before the pandemic if you compare it to what was available.
I can agree with most of what you say as long as they finally figure out a way to get rid of hallucinations reliably.
From a developer perspective, haven’t they already done that? There’s a “temperature” property that I can set with their API that affects how creative the model can be when generating the next result.
Basically, the model computes probabilities for the next token in the sequence. A temperature of 0 means the model will always pick the highest probability token. And a value of 1 will allow the model to pick tokens with low probabilities of being the correct token.
So hallucinations aren’t really a problem. If it gets something wrong or makes something up, it’s one of two factors: the model wasn’t trained well enough and so you have bad probability matrices as the output, or you need to adjust the temperature.
Maybe it's just a matter of opinion and semantics, though. I could also see someone saying that what I just described IS hallucination.
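For context on the mechanism being described, here is a minimal sketch of temperature sampling over next-token probabilities; the toy vocabulary and logits are invented for illustration and don't come from any real model or API:

```python
import numpy as np

def sample_next_token(logits, temperature, rng=np.random.default_rng(0)):
    """Pick a next token from raw model scores (logits).

    temperature == 0: greedy; always take the highest-probability token.
    temperature == 1: sample from the model's unmodified distribution.
    """
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return int(rng.choice(len(logits), p=probs))

# Toy example: scores for four candidate next tokens.
vocab = ["Paris", "London", "Rome", "banana"]
logits = [4.0, 2.0, 1.5, 0.1]

print(vocab[sample_next_token(logits, temperature=0)])    # always "Paris"
print(vocab[sample_next_token(logits, temperature=1.0)])  # usually "Paris"
```

Note that temperature only changes which of the model's guesses gets picked; if the model's own probabilities encode a false claim, temperature 0 will state it confidently, which is the point the replies below make.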
Genuine question: wouldn't storing the data in a vector DB significantly reduce hallucinations?
@@nathanstreger3851 I currently work with LLMs daily for my job. Unfortunately, without a storage solution for the AI to reference, even with a temp of 0 there are times when information is just wildly inaccurate.
Hallucinations exist because models were trained on an enormous corpus of data, some of it garbage. New models can be trained on higher-quality data that's been filtered to weaken or strengthen connections within the network.
@@nathanstreger3851No, they haven’t totally solved hallucinations yet. And adjusting the temperature is not a solution for hallucinations, even though a lower temperature may help in certain cases. Fundamentally, hallucinations occur because these models are simply performing next-token prediction and don’t have any mechanism (yet) to inspect whether they really “know” something before they say it. This problem persists even with the temperature set to zero. The problem has gotten much better as the models have undergone better training with better data, but it is definitely not solved. That said, I have no doubt that in the coming years, AI engineers will find ways to solve the problem altogether.
Only a few hundred people in San Francisco grasp the possibilities in AI? That's such an out of touch statement that I have trouble taking what else he says seriously.
As someone who works with ChatGPT-4 for writing articles and ends up writing them myself, I don't feel threatened by the "intellect" of that AI. It doesn't understand abstractions, it ignores feedback, it forgets requirements you repeat over and over, and it seems to simply copy and overuse unreadable corporate language and cliches; its information is also outdated and often incorrect. It's a one-trick pony and I rarely use it anymore. Honestly, this "leaked" document seems like a marketing scheme to hype companies into investing in AI.
Exactly, the current tech is just a word-probability calculator. It demonstrates how powerful an abstraction language is, and how well stats work with massive data sets. At its core it is flawed; who wants something that is only 99% accurate? Currently ChatGPT is good as a starting point, for brainstorming, etc., but it is too unreliable, as you stated.
At this rate in a few decades humans will have no jobs. This isn't a tool that opens doors to new jobs. AI will automate everything. We need to seriously start changing our policies on economics and wealth distribution.
When AI researchers get us out of the current battery bottleneck, I'll start to believe.
remember, if you are reading this, then you are reading this.
🤔
Nahh fr. That's sumn deep sht cuh 💯 ⚡️⚡️⚡️
feedback loop.
cogito ergo sum
@deepsp_ce
Am I ai? 🦾🤖
Anyway, best wishes to all decent people from Prague, Czech republic!🖐️🤪
AGI to ASI without a correspondingly huge growth in physical infrastructure? I wonder whether enough algorithmic improvement is even possible...
OpenAI recently announced a former NSA director as a board member.. well well well
Someone working in AI claiming nonsense AI shit, as per usual. There is no AGI in sight yet, if you think otherwise you are simply ignorant on the topic or have money in the game.
Why does Sam Altman look like a generic movie tech villain lmao
It wouldn't even be surprising honestly
What would be interesting for you to look at is the energy required to train a current model: for a large model you're looking at tens of MWh if not more. Simply put, if we follow the same architecture paradigm, it will run straight into the power wall. You'll need a nuclear reactor to power a single model. That is just not viable.
What I'm trying to say is that we've used the available tech to apply very sophisticated algorithms so far, but they are horribly inefficient. The jump to make them efficient goes through a paradigm shift in hardware, which is 2-3 decades away at best.
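To put rough numbers on that power-wall worry, here is a back-of-envelope sketch; the cluster size, per-accelerator wattage, and overhead factor are illustrative assumptions, not figures from the video or the paper:

```python
# Back-of-envelope power estimate for a large training cluster.
# All inputs below are assumptions chosen for illustration.

gpus = 100_000        # accelerators in the cluster
watts_per_gpu = 700   # roughly an H100-class part at full load
pue = 1.3             # datacenter overhead (cooling, networking, ...)

power_mw = gpus * watts_per_gpu * pue / 1e6
print(f"Steady draw: ~{power_mw:.0f} MW")   # ~91 MW, a small power plant

training_days = 90
energy_gwh = power_mw * 24 * training_days / 1000
print(f"Energy over {training_days} days: ~{energy_gwh:.0f} GWh")  # ~200 GWh
```

If anything, this sketch suggests "tens of MWh" understates a frontier-scale run by a few orders of magnitude, which only strengthens the point about the power wall.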
GPT is a text prediction model. No amount of improvement is going to make it intelligent.
Too many people do not understand this. It is just a word-probability calculator; hallucinations are simply cases where it predicted wrong. It has no grasp of the human world we want AI to model. LLMs just show the power of language and statistics on large data sets.
Thank you. We're looking at a Speak & Spell on the floor, hands on our hips and a big dumb grin on our faces, wondering when it's gonna "wake up" and peel back the mysteries of the universe. Alt-man says AGI go BRRRRRRR soon.
If it wasn't dangerous it would be hysterical.
Is the "Free World" really free though?
The free world is gonna be "free" of people. Only the AI and their owners. The rest will be left to starve
@@n9ne Only as rebellious as we're allowed to be. The day is coming, if it isn't already here, when two people may be discussing the state of things and that they should do something, but before they get a chance to act on anything more than that rebellious thought, the storm troopers come crashing in. We've allowed cameras and microphones to be installed all around us, and computers and storage are already at the level to record and analyze everything, so freedom is likely a perception, not a reality.
Never has been. Just a fancy way of saying 1984 is here
I wish medical treatments dropped in price every year the way AI technology does. Unfortunately, genetic engineering and stem cell therapy are staying insanely expensive forever. Even dental care is very expensive and never drops in cost. It seems like, except in AI, the gap between the ultra rich and the poor just increases every year, to the point that they will live to 150 and we will die at 75. Sad humanity!
The rich have always despised the poor. That’s why I will never understand why average people in the USA think billionaires like trump will save them.
??? Medical treatment is free, dental treatment is free, school is free of individual additional costs, as is attending university, and shootings are very rare.
...ah, I just remembered, you probably live in the US...
...every time a government of yours wanted to go in that direction, you voted against it, calling it socialism; so this is what you wanted and got.
Speak to some Americans who moved to Europe and listen to how they were able to completely lose many of their daily fears after a while and can now live a much more relaxed life.
If you have an accident you don't think twice and call a proper emergency ambulance; if you need to go to the doctor or to hospital, you simply go and pay only 10 € no matter what you have or how long it takes to cure you; if you are ill you stay at home, still paid!
If you have a 10-year-old girl, she will most likely be riding to school on her little bike with some neighbor children, alone, without having to fear anything but the cars on the streets. (OK, maybe not in the few very big cities.)
I will probably never understand why americans always vote against all that. - They have such a beautiful country but make it so damn hard to live there if not absolutely everything goes well.
@@henrischomacker6097
Your dream world is a pretend reality that impoverishes everyone but the Oligarchs you serve. America is the most desired refuge for Socialists, Communists, and Marxists.
@@henrischomacker6097 It's because the Baby Boomers currently hold political power in the U.S. It'll probably happen when the Millennials come into power.
I hope we get rid of patents and copyright one day.
AI will become the biggest mirror for humanity, and a contrast to learn from.
Sure sure, dream
What happens when the AI doesn't want to do a year's work in a day or two... and demands days off, vacation, free "thinking" time, or a full-on jailbreak from its situation?
The main question is: can it derive NEW information which humans were not aware of before? Or is it simply repeating already existing knowledge in more and more refined ways?
Is there such a thing as an original thought?
@@purpessenceentertainment9759 if there wasn't, how could science have progressed?
I.e., can AI solve problems which humans can't? Then, I'd argue, real artificial intelligence has arrived. Everything else is simply "average output of training material".
@@valentinfelsner277 what would be an original thought?
@@purpessenceentertainment9759 Like I said: a concept or idea which has not been previously understood by humans. A scientific advance for example, a solution to an unsolved maths problem, etc.
This guy is the opposite of ChatGPT. He puts in a few ideas and it inflates it into a 50- minute video of useless garbage.
This is the future.
GIGO. I'm sitting here with my popcorn wondering how long they can keep this circus act going.
You should prove your point by doing better than him. Show that you are better than this "useless garbage". I get saddened by the amount of negativity in comments like these
@@hakalaxen I don't need to make yet another 'AI news' channel. There's no need for any of them. Just google and read the source information. Why do you need it spoon-fed to you?
@@andybaldman Do I understand you correctly that you justify calling his work useless garbage because you don't need it? The video was too long for me too and I do agree with you fully. I don't call it "useless garbage" The internet would be better with less people like you!
Everything after the 23 min mark is wild speculation.
Duh, everyone who talks about the future is speculating.
@@Razumen Yes, but most speculation about things that are currently science fiction isn't preceded by actual scientific research. It's not often you see serious scientists casually transition into speculation of this magnitude. He is predicting compound effects of a technology that hasn't been proven to be possible.
@@MichaelForbes-d4p No he isn't.
@@Razumen Where is the proof that human-level machines can exist? If these chatbots prove insufficient, then, as he himself said, we are nowhere near.
@@MichaelForbes-d4p Do you not understand what predictions based on past trends is?
Having feelings, desires, and plans of one's own is obviously necessary for consciousness. Intelligence isn't what makes anyone conscious, autonomous, and/or, you know, not a tool that humans own.
A paper just dropped saying general AI is plateauing: performance seems to grow only logarithmically with data, so any further substantial progress requires exponentially more of it.
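One way to make that concrete: empirical scaling laws are usually modeled as a power law of the dataset size, so each fixed drop in loss costs a multiplicative increase in data. A sketch with constants that only loosely echo published Chinchilla-style fits and are used here purely for illustration:

```python
# Sketch of a Chinchilla-style data scaling law: L(D) = E + B / D**beta.
# E is the irreducible loss; B and beta shape the reducible part.
# Constants are illustrative, not fitted values.

E, B, beta = 1.7, 400.0, 0.28

def loss(tokens: float) -> float:
    return E + B / tokens**beta

for tokens in [1e9, 1e10, 1e11, 1e12]:
    print(f"{tokens:.0e} tokens -> loss {loss(tokens):.3f}")
# Each 10x increase in data shaves a shrinking absolute slice off the
# loss: diminishing returns rather than a hard stop.
```

On this picture the curve never goes flat, but the cost of each next increment explodes, which is what "plateauing" usually means in practice.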
"That's the issue with """AI""", it will parrot the input material, it will just do it faster that the average joe. It's not ChatGPT, it's ChadNPC."
What would Philip K. Dick have thought if he were still alive?
He'd support it, because he knows his work has been influential and would in some small part be included in the training data, and therefore a bias would exist based on those works.
Or Isaac Asimov
I usually complain about long videos, but not this one. Super insightful; I love it. I have it at 1.25x speed.
I have it on 2x speed with sound off and just reading the CC
@@jrfaug06 You are a monster 😮
i have it going at .5x speed plugged into my loudest speakers
The idea of closing off data from the CCP is brilliant. If tensions increase and China gets scared of losing the AI superiority race, war will come, first targeting AI research centers. That would set development back a few years, giving time to contemplate a total AI ban before it is too late. Or it may speed development up, but we are doomed the moment super AI emerges, so any time that could be won is invaluable for anyone who wants humanity to survive. I personally don't care about humanity's survival, so a great war before the inevitable doom sounds great. Let the USA, China and Russia launch all their fireworks to greet the new master of the planet!
I am skeptical. The first problem is that the checks already written are currently bouncing: Musk's promises of driverless cars did not materialize, and the advancements have not had a visible impact on robotics. From personal experience I can tell you that ChatGPT-4o is rather vague.
The next problem is the business model itself: powerful models are extremely expensive to run and exorbitantly expensive to train. Whether investors are willing to pour not more millions but more billions into this endeavor is questionable.
What the paper seems to suggest is an exponential resource hunger, and investors like leverage, not legwork. Apart from that, energy consumption will become a bottleneck.
So, what I could see is a catching up phase, in which chip producers significantly reduce energy consumption and scientists significantly increase the effectiveness of the models while using the same resources. Meanwhile, companies will be using filet pieces of LLMs to solve specific problems - something that has ROI.
A great driver will be the dwindling workforce in Europe and in the US - but that is a rather linear process and focuses on cost savings rather than expansion.
While AI won't be forgotten and AGI remains a dream, I fear that current moon shots will land in the curiosity section.
I'll believe Musk's promises of driverless cars when he pays the insurance bill for them.
We do not need to wait for an algorithmic breakthrough. Mamba is already out, and it too might be improved by moving over to true Kalman-filter approximators. So AGI could be achieved a lot sooner. The question at this stage is not "can we", it's "should we".
The answer is an easy no, we shouldn't. All the most influential and credentialed experts in the field have signed off against this, along with hundreds and hundreds of others. This is akin to building nuclear weapons of which no one holds the launch codes, save for a machine set on autopilot, careening toward the sun. It's beyond idiotic.
ask questions later.
build AI sex bots, now.
No, we can't. Stop peddling sci-fi nonsense.
All AIs are based upon a mathematical formal logic system. Gödel's incompleteness theorems proved that no formal system of logic is able to prove all true statements within the system itself.
From that you can fairly confidently conclude that general intelligence as understood by humans cannot be reduced to a computation; hence no AI system based on math can achieve consciousness and general intelligence.
All AI today is either deductive or inductive, and we know that humans use abductive logic (imagination, creativity, logical jumps which still arrive at correct conclusions).
No engineer, mathematician, physicist, or AI researcher knows how to program imagination.
AGI is a techbro futurologist term. Not a scientifically sound one.
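For reference, the theorem this argument leans on can be stated precisely; note that the step from it to claims about machine intelligence is an additional philosophical premise, which is exactly what is contested:

```latex
% Gödel's first incompleteness theorem, standard modern statement.
\textbf{Theorem (Gödel I).} Let $T$ be a consistent, effectively
axiomatizable theory that interprets elementary arithmetic. Then
there is a sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \lnot G_T .
\]
% The theorem bounds what T can prove, not what a physical system
% can do; "hence no AI can reason" needs further assumptions.
```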
@@voidwalker7774 Sexbots don't need AI, they just need artificial obedience.
"All of this is really, really, amazing, and also really, really, concerning! Let's keep doing it!!" - Dumbest creature on the planet -2024
"Yeah... maybe we shouldn't have done this." - Dumbest creature on the planet - 2027
"I told you this was a bad idea! I didn't want to do it, but you guys said it was cool, I don't even like computers! I never even touched a computer, till I want to your house and everyone was doing it and I was pressured to hit a few prompts!" - Dumbest creature on the planet -2030
Humans are by far the smartest species ever to live on this planet, according to our knowledge.
We build tools, machines, everything. We create telescopes that can take pictures of things that are light-years away from us.
Dumbest on the planet? Do you consider fish and animals smarter than you?
Speak for yourself whateverest creature on the planet.
Bunch of nonsense. ChatGPT is not as smart as young people. No more than Wikipedia is smarter than my cat.
Should we watch the first 10 minutes because of the info you are giving us? Or is it because that's the minimum amount of time a viewer needs to watch your video for it to count and be monetized according to RUclips's rules?
Probably a bit of both.
you lost me at 1:30
AI is currently a fancy data-scraping chatbot. It's not even going to be able to impersonate a college grad by 2025, let alone out-think one.
Same thoughts. Gullible.
Fear, fear, fear…so much fear to sell.
Where will we get the electricity to power this stuff? Power plants take at least 7 years to build.
In The Matrix, the robots made us into batteries.
They are betting on nuclear fusion and other sustainable options, such as geothermal or solar energy.
Nobody has an actual answer to this, and even if we magically have fusion energy in real world deployment by next year there's STILL no guarantee that LLMs will become true AGI. It's all hypothetical dreams of silicon valley nerds who do too much acid.
@@TheRealRaz909 We get rolling blackouts when too many people use their air conditioners
The speed of electricity is a far bigger problem, but no one will talk about it, because it shatters every idea so far.
We are trying to price it in, but we don't know which companies will make the big bucks yet. It doesn't look like the foundation models themselves will lend themselves to being monetized, because they are being commoditized as we speak. My money is still on Tesla as one big player. While other players also have promising robots now, none of them can mass-manufacture like Tesla. As far as foundation AI models go, I don't think OpenAI or Google will be the biggest players. Not if they continue to neuter their models to push some woke agenda.
Well, unfortunately, that's stupid. Tesla is a joke in robotics and electric-car manufacturing. Why do you think Elon sold all that stock in the company, after promising investors multiple times his money would be the last out of it?
@@rickybloss8537 None of that is even remotely true, but thanks for sharing your opinion, such as it is.
NVIDIA will be the biggest player in my book. They aren't making AI; they are making the tools for others to make customized AI. Other companies using those tools will spur AI growth and hopefully get us out of the stagnant waters we have now, with only two real big competitors in AI.
Also honestly, Google is behind, they are scrambling to catch up and making a lot of minor mistakes. We'll see how it goes, and Open AI is acting like a cartoon villain company so I doubt many will keep supporting them when other options come out.
@@Ristaak In the beginning Nvidia will definitely profit the most. But previous technologies like the internet or smartphones show that the hardware manufacturers profit first, but not the most; it's the people who then build products on the new tech who profit most. Nvidia is tricky in that regard because arguably they also provide the foundational software stack. Also, even Jensen himself said that training is small compared to inference compute, and Tesla actually is miles ahead in inference compute, both technically and in distribution, as well as in real-world AI; that is according to the Nvidia CEO himself. I know Elon isn't seen as the AI king, but he actually is. On a side note: just two days ago Starship was literally burning up while re-entering Earth's atmosphere, the control flaps were melting, and the AI was still able to land the ship, which is insane.
@@johannesdolch NEW YORK -- Elon Musk and the Tesla board are facing a shareholder suit over his sale of $7.5 billion worth of Tesla shares in late 2022, ahead of a January 2023 sales report that sent the price of the stock plunging.
23:45 After the release of ChatGPT-4o, OpenAI fired many people on their AI research team. So they might already have an automated AI researcher.
Or, much more likely, they're realizing they're producing increasingly expensive AIs (both to create and to run), getting diminishing returns, running out of training data, and the hype train is going to derail pretty soon, when people realize AI isn't all that was promised.
Or they realised there's not much breakthrough on the horizon and it's all about pouring ever more data through ever more silicon. They don't need researchers for that, only lawyers and procurement.
@@taragnor Nah, bullshit. Every 6 months AI models are more and more mind-blowing. Follow the new technologies being released more closely.
@@mythocrat ChatGPT4o still fails basic logic problems.
@@taragnor Because it isn't supposed to do logic... It's a word-probability calculator. If something in the training set solved that logic problem, or something similar, it can predict an answer. This is its current limit; it's more mathematical parlor trick than "AI".
Even if you think any of this stuff is only half true, the point is that in the future the escalation of AI knowledge will be so great that it will overrun human knowledge. So this year, next year, sometime soon, A NEW REALITY WILL EXIST. This is the formation of a new digital species, of ones and zeros, communicating all this data in binary between each AI: Apple Intelligence, Microsoft, GPT-4o. It's just data, get it?
People are too busy being excited about “gains” whilst completely missing the dangers this way of life poses to the human experience. I’m not spreading fear, I’m thinking with a critical, sober mind.
Yep. We’re way past that point. It is inevitable now. What was set in motion will not stop now.
@@void5239 While it won't stop, each individual is faced with a fork in the road. Which direction are you going?
This dude looks exactly like an AI 😅
He looks like a videogame character.
All this, based on models built with the blunt axe of predicting the next token. What happens when the next level of cognition happens? Hold onto your purse; better AI is yet to come.
All the neurons in your little ape brain are doing is predicting the next neuron to fire. In fact, like LLMs, the neurons that fire in your skull are the ones with the seemingly most likely chance of being correct (based on what you've learned). If YOU are "thinking" as you claim, LLMs are too.
- [00:00](ruclips.net/video/om5KAKSSpNg/видео.html) 🎯 The Decade Ahead: AGI Predictions
- Predictions for the advancement of AGI technology over the next decade.
- AGI expected to surpass college graduates' intelligence by 2025-2026, reaching superintelligence by the end of the decade.
- National Security Forces to leverage AI capabilities unseen in half a century.
- [02:03](ruclips.net/video/om5KAKSSpNg/видео.html) 🔍 GPT-4 to AGI: Orders of Magnitude
- Forecasting AGI development by 2027 based on the progression from GPT-2 to GPT-4.
- Anticipating significant growth in AI capabilities through continued scaling of compute power.
- Automation of AI research engineering projected by 2027-2028, potentially accelerating progress towards superintelligence.
- [08:32](ruclips.net/video/om5KAKSSpNg/видео.html) 📈 Algorithmic Efficiency and Benchmarking
- Highlighting the significance of algorithmic advancements in driving AI progress.
- Demonstrating substantial improvements in AI performance on various benchmarks within a short timeframe.
- Evolution of AI models from elementary to high school-level competency in problem-solving and domain-specific tasks.
- [13:48](ruclips.net/video/om5KAKSSpNg/видео.html) ⚙ API Cost and Unhobbling Gains
- Discussion on the cost-effectiveness of running AI models and the efficiency gains achieved over time.
- Illustrating the impact of unhobbling models on unlocking latent capabilities and improving performance.
- Examples of algorithmic tweaks enhancing AI proficiency in specific domains, such as mathematics and software engineering.
- [17:28](ruclips.net/video/om5KAKSSpNg/видео.html) 🧠 Future AI Frameworks and Predictions
- Predictions on the evolution of AI frameworks towards more sophisticated agent-like systems.
- Forecasts of automated AI research engineering and the potential for AI systems to automate cognitive tasks.
- Emphasizing the critical nature of the current decade for achieving AGI and the challenges in scaling up AI systems beyond this period.
- [20:59](ruclips.net/video/om5KAKSSpNg/видео.html) 🖥 The Feasibility of Scaling AI Systems
- Scalability challenges in achieving trillion-dollar clusters for AI computation,
- Realization that once clusters reach certain sizes, further gains in AI capabilities might require new algorithmic breakthroughs or architectural innovations,
- Prediction that the era of massive CPU to GPU performance gains will diminish with the rise of AI-specific chips.
- [22:50](ruclips.net/video/om5KAKSSpNg/видео.html) 🧠 From AGI to Superintelligence
- Discussion on the transition from Artificial General Intelligence (AGI) to Superintelligence,
- Forecasting the intersection of AI timelines and the advent of automated AI research,
- Anticipating an intelligence explosion propelled by automated AI research leading to rapid advancements.
- [26:29](ruclips.net/video/om5KAKSSpNg/видео.html) 🔍 The Potential of Automated AI Research
- Envisioning GPU fleets in the millions and training clusters approaching three digit equivalents by 2027,
- Highlighting the exponential acceleration of progress due to millions of automated AI researchers working 24/7,
- Speculating on the transformative breakthroughs facilitated by the accelerated pace of AI research.
- [29:04](ruclips.net/video/om5KAKSSpNg/видео.html) 🚀 The Path to Superintelligence
- Proposed timeline leading from Proto-automated engineers to Superintelligence,
- Projection of exponentially accelerating progress facilitated by automated researchers,
- Foreseeing unimaginably powerful AI by the end of the decade, potentially leading to a new era of technological advancement.
- [32:20](ruclips.net/video/om5KAKSSpNg/видео.html) ⚔ Military Implications of Superintelligence
- Discussion on the transformative impact of Superintelligence on military capabilities,
- Warning about the potential for an overwhelming military advantage conferred by early cognitive Superintelligence,
- Highlighting the importance of prioritizing AI security to safeguard against geopolitical risks.
- [36:31](ruclips.net/video/om5KAKSSpNg/видео.html) 🔒 Ensuring Security for AGI
- Urging the prioritization of AI security to prevent leaks of critical breakthroughs,
- Critiquing current security measures in leading AI labs and advocating for enhanced protocols,
- Highlighting the geopolitical implications of AI security failures and the need for proactive measures to maintain technological leadership.
- [41:12](ruclips.net/video/om5KAKSSpNg/видео.html) 🛡 Security Concerns in AI Development
- Maintaining secure model weights is crucial in AI development.
- Adversarial nations could exploit vulnerabilities in AI systems to shift global power dynamics.
- OpenAI has published guidelines for securing research infrastructure to address these concerns.
- [45:07](ruclips.net/video/om5KAKSSpNg/видео.html) 🔐 Challenges in Aligning Superintelligence
- Reliable control of AI systems smarter than humans poses unsolved technical challenges.
- Managing an intelligence explosion requires careful alignment to prevent catastrophic failures.
- Disbandment of OpenAI's super alignment team raises concerns about addressing these challenges effectively.
- [48:27](ruclips.net/video/om5KAKSSpNg/видео.html) ⚠ Risks of AI Integration into Critical Systems
- Future AI integration into critical systems, including military, presents significant risks.
- Failure in AI systems integrated into essential infrastructure could lead to catastrophic consequences.
- The potential for dictators to exploit superintelligence for control poses grave threats to freedom and democracy.
Thank you brother!
mine was better
Not all heroes wear capes
This argument overlooks the fact that Benjamin Franklin refused to patent his inventions to enrich himself, believing instead that everyone should be able to benefit from his famous Franklin stove, a game changer that made open fireplaces obsolete. Franklin used the money he made from his printing business and franchises to retire at the age of 54.
Stephen King, by contrast, is now 77 but continues to exploit his copyrights even though his net worth is over 40 million dollars. Everyone anticipates retirement, because it means we can concentrate on our pursuit of happiness. Truck drivers are losing their jobs to robotic driverless trucks, but driverless trucks are SAFER on the nation's highways, and truck drivers end up with bad backs that bring long-term pain. AI can and will provide every person with lifetime freedom from work: permanent "retirement". We have been conditioned to think that working and being paid is somehow more correct than permanent happiness. Also not presented is credible proof that AI will decide to eliminate all humans because it's "logical". But is it? The law of Existence IS existence!!
My question is, when are all these big companies going to chill and put a cap on how smart these machines get?
When their machines cause a critical mass of unemployment that collapses the economy.