Machines can't achieve AGI, because you need live human intelligence and a government ID to crack AGI 🦾🌍 Then AGI will be achieved, but it won't be a machine, it will be a human holding an AGI system on his phone, and he will be called a superintelligent human being 🦾🌍
It's easy to be cynical about such a definition, but the point is that when it can actually do work AND generate _profits_ (not just revenue) of such magnitude, then it's pretty much guaranteed to be AGI. Like, the world isn't going to spend $100 billion on chatbots or image generators in any relevant timescale. But I agree that it is a silly way of defining such a thing. It's like someone in the late 1800s defining a "family car" as a motorised vehicle that generates $1 million in sales.
The purpose of the definition seems solely to finalize the details of the extant Microsoft/OpenAI deal. It sets a cap on Microsoft's possible returns at around $40 billion, after recovery of its $13 billion investment.
@@CatsAreRubbish no, not in the slightest. LLMs and related technologies are actually pretty useful even before they reach AGI. But it is highly unlikely they will reach AGI anytime soon.
And since the more you know, the more you don't know, the AGI should be saying "I don't know, but approximately..." a LOT.
Instead, it aims to be the best used Ford truck salesperson it can be while waxing virtue signals about how it can't offer some answers for "ethical" reasons.
I think people are starting to lose that ability themselves, thus pseudo-AI becomes more humanlike without even trying. We are starting to think in headlines and clickbaity announcements instead of actual content-based thoughts. The more afraid we are on the inside, the more self-assurance we project. We are always sure of things, we always expect the most optimal outcome or even a miraculous one, we always try to say whatever will get the best reaction from the listener. Delivering the answer that just satisfies the one asking the question is only a step away from what LLMs do.
I have a rather concrete idea of what I would call an AGI. Personally, I think that when we make a fully autonomous program that can be given a problem, then finds data on the problem, prioritizes which information it will take from that data, learns from it, tries to apply it to the problem, learns from the attempted application, and does all of that recursively, it will at least be very close to what I would call an AGI.
That's because we're really careful about the concept of "intelligence". AI merely needs to reach a crazy level of proficiency and a resulting perceived consciousness for us to wonder whether it has become a digital being. One step further and we'll have little reason to question its consciousness.
@@captainobvious8037 I think it has as much to do with autonomy as with problem solving. For example, if I just let this new AGI run with a bunch of data streaming at it (like what a human experiences), will it do something like ask a question, wait for a response, and then try to do things on its own if I don't answer, or will it do absolutely nothing? I don't think we've even started to measure an AI's ability to take productive initiative, which is a huge part of what we actually want from AI, yet I never hear anyone ask about that, even though it's a key attribute of what makes a human worker valuable and is not 1:1 correlated with intelligence either.
THIS! Before AI, we need NI. We need to know what makes intelligence in the first place. Even if we were never able to replicate or change it in vitro or inside the human brain, that would be fine. Same as fusion power, basically: we cannot make it (aside from using nukes), but we know how it works and have known so for almost 100 years now. We are far, far away from that point of knowledge when it comes to NI.
Yann LeCun is famous for saying, "If you want to work towards creating human-like intelligence, don't work on LLMs." He's working on architectures called JEPA (Joint Embedding Predictive Architecture). These can learn without language, just as most animals do. They create models to make sense of data and constantly update these models as they receive new data to improve their usefulness/accuracy. This is much like humans and other animals and is probably at the root of our general intelligence. Early days yet, but results so far are very encouraging. I think he's on the right road.
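The core idea can be sketched in a few lines of PyTorch: encode a context and a target into embeddings, and train a predictor to guess the target's embedding from the context's embedding, rather than predicting raw pixels or words. This is only a toy sketch of the published idea, not Meta's actual code (in the real thing the target encoder is, for instance, a moving average of the context encoder):

```python
import torch
import torch.nn as nn

# Toy JEPA-style sketch: predict the *embedding* of a target patch from the
# embedding of its context, instead of predicting raw pixels/words.
class ToyJEPA(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.context_encoder = nn.Sequential(nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.target_encoder = nn.Sequential(nn.Linear(128, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.predictor = nn.Linear(dim, dim)

    def forward(self, context, target):
        z_ctx = self.context_encoder(context)
        with torch.no_grad():                      # target encoder is not updated by this loss
            z_tgt = self.target_encoder(target)
        return self.predictor(z_ctx), z_tgt

model = ToyJEPA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
context, target = torch.randn(32, 128), torch.randn(32, 128)   # stand-ins for image patches
pred, tgt = model(context, target)
loss = nn.functional.mse_loss(pred, tgt)           # match in embedding space, not pixel space
loss.backward()
opt.step()
```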
Ultimately, self-awareness requires feedback mechanisms; that's why the internet will never become aware. Creativity is just modeling with limited resources (you wouldn't need it if you knew everything). Developing a reward system as motivation, like dopamine in physiology, may prove to be the most difficult concept to apply.
Many people seem to overlook that human intelligence developed organically over millions of years. Starting from microbial life, evolution has gradually woven a highly complex intelligence within us. It's conceivable that a quantum computer might simulate such complexity more quickly, potentially creating something that more closely resembles true conscious intelligence. Children learn and make sense of the world by interacting with their environment, often through trial and error. They might try to eat inedible objects or fall numerous times before mastering the ability to walk. This process of failure and adjustment is crucial to developing competence, and a child's brain is in a constant state of growth and adaptation. Expecting Large Language Models (LLMs) to become sentient is, in my view, akin to expecting an artificially created eyeball in a lab to see of its own volition. LLMs are tools designed to perform specific tasks based on the data they are trained on. Their purpose is limited to those tasks, and they lack the intrinsic will or drive that characterizes human intelligence. The data they're trained on, primarily written and verbal language, is inherently restrictive. For LLMs to achieve anything resembling true sentience, they would need to incorporate a broader range of sensory inputs, including visual, tactile, and possibly even imperceptible senses.
I personally find the comparison of a ten-year-old who can fill the dishwasher a lousy example. The ten-year-old had 10 years of training in understanding the physical realities of the world and his own body. Granted, humans do not need as much training data and are much more efficient with that. But we can train our brains in the world because of our bodies. On the other hand, computers are totally detached from that reality unless trained in digital twin virtual worlds.
Yup. LeCun is a contrarian who will always move the goalposts in order to be "right". Marcus is basically a troll at this point. I'm not saying LLMs *will* achieve AGI (much less ASI), but these two are just as bad as the folks who claim to know that AGI is coming next week.
Wrong! It is a good example, because AI doesn't have to worry about stink, or walking around debris, junk, or dirt; it doesn't have to worry about hygiene. And besides, what does it ever have to worry about to begin with? A thing cannot be as human or better unless it is subject to what we are subject to. And Sabine, if you blanket-like subscriber posts regardless of the content, well...
Yup, and this is why the new hotness is AI agent employees. I suspect they are mostly hype for now, because most jobs have nuances that are hard to capture and require ineffable things like "common sense". But there definitely will be impact in the next few years.
This thought process is why multimillion-dollar businesses are being run into the ground. Tech-illiterate execs just ask AI what to do, thinking it has any reasoning capability at all.
It only seems that way on the surface because it's always highly confident in its answers. Be an expert in a field and ask it specific questions about that field, and you'll see an LLM fall on its face. It's still amazing, but not as amazing as a human.
Ask chatGPT how long is a piece of string (to 3 decimal places). It doesn't pace out the distance across a space, it doesn't chuff with effort, it doesn't have skin in the game.
Altman saying "we know how to get to AGI so we're going to start focusing more on that instead of our current products" means "We want you to think our current products hitting a wall is because of neglect, not a limit of the technology"
What wall? Did you read the TTT (test-time training) paper from MIT (Nov 2024)? Haven't you seen the results of that yet? There is no wall; synthetic data is the new S-curve until we hit ASI.
This is dishonest, because Sam Altman has publicly stated the limitations of previous scaling methods: "Altman suggested that the research strategy that birthed ChatGPT is 'played out' and that future advancements in artificial intelligence will require new ideas."
A lot of white collar work is more concerned with process rather than actually engaging in novel problem solving, so although I don't think something like O3 will replace white collar workers, I think it will massively reduce the number of white collar workers needed for a particular business function. Work in a team of 5 people, in a year or two, that team will be 2 or 3 people. Remember, the number 1 cost for most organisations is labour and they have every incentive to cut costs.
@@satchillananda Not really. AI can quickly generate a code snippet, but if there is a bug, or if it doesn't fulfil the business requirement, AI is not going to help much, so you have to start debugging the code it generated. Suddenly your team is working on fixing bugs and addressing business requirements, so you again need the same number of employees or more. This is, however, only true for small to mid-sized businesses; large organizations might need more employees, as they have complex code bases. So I predict that small and mid-sized companies will save a lot of money but won't hire more people, and those people will end up joining large organizations.
I suppose it depends on the nature of the business, but in my experience with working on projects for customers who are other businesses, the novelty level is always high, meaning that the customer either wants something which hasn't yet existed in our software, and/or it will require troubleshooting a novel situation. The reason for that is that the pace of change in various apps used by our customers is incredibly fast, with each upgrade yielding different results, interfaces, etc. Believe me, sometimes I wish the 'magic of AGI' could be deployed to easily resolve some of the more intractable issues we have with the LLM being used for our product. My only concern is that naive C-Suite and the Managerial class aren't knowledgeable enough to understand these limitations(or some will know but not care).
That is an illusion. Have you ever worked in a big company where part of the work is something like the automated placing of complex orders? (Not ordering something online or in a shop, but in the sense that each step of the order steers one or more teams to perform tasks at the correct and most efficient time.) Forget about AI or AGI doing this at the moment. Example: a client asks for a telecom line connecting point A to point B, located in different countries. Go and dig up some information on what effort is involved in this. That is white-collar work, and those jobs won't be replaced by AGI anytime soon (they may be assisted, though). Too many people picture white-collar jobs as profiles like an accountant; that is just a minority. And still, even an accountant: try to replace him with an AGI that knows all the rules and see if your company will still make the same profit after taxes... I doubt it, since the engine will lack the creativity of a (good) accountant. It will be the same as it has always been for the coming decades: people will get smarter. A job that can only be done by a select group of people due to its complexity right now may be called a simple job within a couple of years.
Identity confusion like that is actually fairly common in AI models. "Out of 27 AI models these researchers tested, they found that a quarter exhibited identity confusion, which "primarily stems from hallucinations rather than reuse or replication"." Also keep in mind that this model outperforms all other AI models in almost all benchmarks (and completely smokes all in math performance), which is kind of impossible if the model was just a copy.
@aiuifohzosfdh Grok and Claude know they are themselves; not being aware of one's self just means being less self-aware, imo. Everyone talks about ChatGPT, so obviously it's all over the training data, but not quite understanding the context there is still telling, imo.
@@nyanbrox5418 No, Grok and Claude don't know anything; they predict the answer most likely to be maximally satisfying for the user. DeepSeek does the same, and if you ask its identity it will say it's DeepSeek. There have been issues, though, due to its training data including ChatGPT outputs from the internet. Many models, as scientific research has shown, exhibit the same "hallucinations". The fact that DeepSeek outperforms literally all other published models makes it obvious that it's not just a copy. How can a copy be better?
@@NeoSHNIK I would say, then, that it needs to ask questions based on internal prompts not guided by predefined rules; i.e., it needs to generate a form of "want" for the knowledge, not "I've been told to ask questions about this stuff".
@@TheWorldsprayer Yeah, until it can think on its own, it's not AGI to me. It can't rely on prompts to process information; it should process information on its own, constantly and consistently, and as one complete mind with access to all its parts at once. It should be able to send me a message when it "wants" to, and then I'll be open to it being an AGI.
A big missing factor is continuously updating the model according to its interactions. It's currently not really feasible to retrain indefinitely, but we might very well reach AGI just by scaling what we have.
6:00 I think that the act of living is a ton of learning that doesn't usually get mentioned. Living in the world from zero to ten is training. Being social with people and parents is training. Moving from one side of a room to another, experiencing a sunrise, getting burned, is training.
@@eveleynce Exactly; I think people often forget the absurd amount of training that living things go through doing what we very biasedly consider easy.
@@JMSouchak And there are currently robots that walk and move way better than any of us do. Robots are achieving superhuman ability in such things right now.
But humans tend to see an analogy after as few as two examples; we can compare anything to anything without needing thousands of examples. AI is incapable of that.
Ignores LCMs and tries to cite LLMs as AI. AI is not an LLM; it is a system of components. Anyone claiming that an LLM by itself is AGI is ignorant of LLM routers, complex systems, and interconnected networks. We already have AGI, it is just not in the form of an LLM. 2) Ignores LCMs, federated systems, TTT, transfusion models, the fact that the average person scores 67% on the ARC benchmarks, bot-net possibilities, and many other advancements like Google's new paper on regression learning. 3) Take it from me: AGI is here, and we're actually working on ASI frameworks now.
If my understanding is correct, to pass the ARC tests one needs TTT rather than a plain LLM. If that's right, Sabine completely misses in her assessment by saying we're hitting some data wall, since TTT isn't all about (initial) data.
That depends on the task. You wouldn't need to spend that much to make a robot do the dishes. If we're talking about researching the cure for cancer for 1 month, then maybe you'd pay a human a lot more than that.
Came to the same conclusion. And give me a really well-made T-bone steak instead of McDonald's and I will astonish you with what I am capable of 🤣. Really, you would be surprised how much good motivation can boost the human brain to go above and beyond.
You cited Yann LeCun and Gary Marcus, the two biggest AI skeptics in the world right now, both of whom have been proven wrong to an embarrassing degree multiple times over the past few years-all while failing to cite the _dozens_ of leading AI experts who are NOT skeptical and who believe AGI is not far away at all. It feels like you just cherry picked the two guys who confirm the personal belief that you already hold. That’s fine, but you shouldn’t present the video as if it’s an objective analysis of the current situation. Objectively speaking, there are far more leading AI experts who believe we’re quite close to AGI than there are those who believe it’s very far away.
@ First of all, that’s one survey and it’s from 2023. Given the pace of recent AI advances, 2023 is a lifetime ago; go read up on what the leading lights in the field are saying now. Virtually all of them have significantly shortened their timelines. Yann LeCun has _absolutely_ been proven wrong multiple times over the past few years. To give you just one example, he confidently stated during a podcast interview that an LLM would never be able to accurately predict what happens to a glass of water on top of a table if you move the table. I repeat: he said an LLM would literally _never_ be able to answer this correctly, because LLMs are trained on text and therefore don’t have a true understanding of the physical world. Not only was he wrong in saying it would _never_ happen, but he was wrong at the moment he said it; before the podcast had even ended, people were already posting screenshots of them asking GPT-3.5 this question and it nailed it on the first try. So he just speaks with a completely unwarranted degree of confidence on things he is very often wrong about. Lastly, as to my own supposed lack of understanding: I’m an AI engineer and have been working in this field for nearly 20 years. I have a PhD in computational linguistics, and at my current company the majority of the work I do is on LLMs. I have been published multiple times in peer-reviewed journals on the subject of AI, and have taught college-level courses on machine learning, computer science, and other related topics. So I think it is perhaps you who should “take a look in the mirror”; I guarantee you that I am more knowledgeable on this subject than you are.
And of course the most informative comment gets ignored into oblivion. I just wanted to comment so you'll know that there is a growing crowd of lurkers who agree with everything you've typed but don't bother arguing on the internet, because people are in denial of their own biases.
Wanted to add another comment agreeing with you. Skepticism of AI is trendy right now but it’s not necessarily the case that AI progress will slow down
5:04 As the creator of ARC said, "Calling something like o1 an LLM is about as accurate as calling AlphaGo a convnet". o3 is the alexnet moment for program synthesis. 5:10 "Most humans can solve ARC problems without ever having seen anything like them". That is exactly what o3 did and the way the ARC test was designed. You're recommended to train on the public test set though, because this saves time making sure your model doesn't struggle with the JSON format or understanding what you mean by grids. The important part is the private eval contains novel general reasoning problems that we don't have anything online to train on, so the only way to pass it is to learn the abstract rule at the moment you're presented with it, just like children do. Regardless, passing ARC doesn't mean AGI anyway. It's a necessary but not sufficient stepping stone on the path to it. We probably should consider it a matter of degrees rather than a boolean too. They clearly do some general reasoning now, but it's very simple.
@quantumspark343 Yeah there's a reason the average human baseline isn't ~99% because some of them are hard, but most of them are intended to be "easy to be solved by humans". I don't necessarily mean simple in terms of how hard the tests are either, I mean simple in terms of generality. This is kind of like it can learn to ride a bike by adapting the priors learned from walking to this new domain, so it recognises that dynamic balancing and inertia are applicable in a domain that's a type of exercise, but something much more general might discover quantum mechanics by running classical physics experiments that don't add up, proposing entirely new experiments, coming up with hypotheses that contradict priors learned in other domains, and developing an entire field of research before even establishing anything it can count as ground truth.
Yeah. Also, I don't know why I watched this video. It is very clear that this woman is from a different field and does not know much about the current research.
This time I have several arguments to make against the content of this video. The podcast clip with Yann LeCun, for starters, was from a year ago; he himself has since pondered whether or not reasoning models are enough to take us to the next stage, broadly speaking. Microsoft making that silly claim about AGI is only so they can prolong their partnership with OpenAI: the deal, put simply, prevents Microsoft from using OpenAI's products for profit after AGI is reached, and as such they want to postpone that AGI definition far into the future. I think the video has been phrasing things and interpreting its arguments backwards. It's not that AI companies are far from "AGI" and therefore make more abstract claims about it; it's rather the opposite: we are getting to the point where it may well be harder to benchmark a model than to benchmark a human, because these models are getting to the point where they can match us at any task. Yeah, LLMs are not the pinnacle of intelligence; they work in a very indirect way to capture intelligence and intuition... but guess what, a giant monkey can be as dangerous as a man with a gun.
As far as that goes, I think it is already "generally intelligent" in the sense that it can respond to a question better than many professionals. To me, the next evolution in its usefulness will come with some form of memory and persistent context.
Slight correction: the strategy you mention is not just chain of thought. It's also tree of thought, where there are several chains of thought arranged in a tree and the model searches for the optimal one. A chain of thought is a single branch in that tree.
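Roughly, the difference looks like this in toy Python (a sketch only; `llm_continue` and `score` are stand-ins for real model calls, not any actual API):

```python
import random

def llm_continue(thought):
    # stand-in for a model call that extends a partial chain of thought
    return [thought + f" -> step{random.randint(0, 9)}" for _ in range(3)]

def score(thought):
    return random.random()   # stand-in for a value model / self-evaluation

def chain_of_thought(prompt, depth=3):
    thought = prompt
    for _ in range(depth):                          # one greedy branch
        thought = max(llm_continue(thought), key=score)
    return thought

def tree_of_thought(prompt, depth=3, beam=2):
    frontier = [prompt]
    for _ in range(depth):                          # keep several branches alive
        candidates = [c for t in frontier for c in llm_continue(t)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)                 # pick the best leaf of the tree

print(chain_of_thought("solve: 2x+3=7"))
print(tree_of_thought("solve: 2x+3=7"))
```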
You have to differentiate two things:
- the traditional AI training approach
- test-time compute (giving models time to think)
The first might be slowing down. The second is not, and will become much more efficient in the coming months. There is also a concept called test-time learning (fine-tuning while thinking) which has even more potential. So no, overall AI is not slowing down, and we will continue to see huge breakthroughs in the upcoming months. As for AGI, it obviously just depends on the definition. But I am also pretty sure that almost every definition will be fulfilled in the near future.
It reminds me of when I was a kid (unfortunately, I'm that old) and people talked about the missing link (between humans and apes). Later, that search lost its significance. Part of the conceptual problem was discretizing a continuum. I think something similar is happening now with AGI.
Sabine, I’m sitting here, pants around my ankles, staring at the bathroom wall like it holds the answer to this relentless cycle. Something’s off-IBS, anxiety, OCD, who knows? It’s like my brain’s hardwired to sound the alarm even when the tank’s empty. I get up, I sit down, rinse, repeat, fully aware that it’s pointless, yet here I am, stuck in this absurd ritual. If I didn’t work from home, this would be a disaster-a humiliating, career-ending parade of unnecessary bathroom breaks. Whatever it is, it’s maddening. A quiet, personal war waged in porcelain solitude.
I'm really sorry to hear that. I may just be a random stranger on the internet but I just want to say you're not alone. In fact, I have a friend who also feels an irrational urge to go to the bathroom, albeit at night when it disturbs their sleep. That being said they've gotten a lot better as their mental health has improved thanks in great part to counselling. I hope you also find some help, there is no shame in being cranky, and wish you all the best.
@chalichaligha3234 I appreciate the kind words. It's probably rooted in a quiet, constant anxiety-something that doesn’t really scream on the outside but makes itself known in strange, physical ways. I’ve come to realize, in a very intimate way, that anxiety and depression don’t always show up as feelings. They like to twist the body instead. That said, life’s pretty good right now, and this odd little truth doesn’t interfere much. And by the way, you’re a truly nice person for saying that.
Meanwhile, if I ask OpenAI to draw me a picture of a watch with the hands at 5:38, I will always get a watch with the hands at the 10 and 2 positions. The reason is that the overwhelming majority of watch pictures out there show the hands at 10 and 2; it's the most attractive position for a watch. That's an indication of overtraining. AI is very dependent on data, and all you data scientists out there are probably already feeling it: garbage in, garbage out. It cannot self-correct; self-correction would be a mark of AGI, and it cannot do that. AGI would also create its own data for training purposes, and the current level of AI is nowhere near that. It's a *transformative* model: it transforms existing data, it cannot create new data.
It hugely depends on compute. I code with it sometimes. At weekends it’s creative, seems to understand much better what I’m asking for and makes few silly errors. At peak times it’s like a drunk junior developer.
GPT is an LLM. It doesn't draw. It just prompts a different AI to draw (DALL-E). They are entirely different AIs. You could prompt DALL-E directly and get better results, probably.
If you tell the OpenAI o1 model to draw a picture of a watch with the hands at 5:38, it draws a watch with the hands at 5:38. You are using the wrong model; try it yourself. It will also write Python code or JavaScript code or SVG or whatever you want to draw the watch with code. And that isn't even o3. It's like saying mixed-mode LLMs can't do math because a couple of months ago they could only score 2% on graduate-level math; that bumped up to 25% by the end of the month. And that's graduate level, not basic; the good ones are perfect at basic math. Every time you say "they can't...", you will find out that either they can and you don't know it, or they will in a few months. The new models self-correct. AI architectures aren't done; they are always getting better.
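The "draw it with code" version is a trivial bit of arithmetic: the minute hand at 38 minutes sits at 38 x 6 = 228 degrees, and the hour hand at 5:38 sits at (5 + 38/60) x 30 = 169 degrees. A quick matplotlib sketch of my own (not model output) that does exactly that:

```python
import matplotlib.pyplot as plt
import numpy as np

hour, minute = 5, 38
minute_angle = minute * 6                      # 360/60 degrees per minute -> 228 deg
hour_angle = (hour + minute / 60) * 30         # 360/12 degrees per hour   -> 169 deg

def hand(angle_deg, length):
    theta = np.deg2rad(90 - angle_deg)         # clock convention: 12 o'clock = up, clockwise
    return [0, length * np.cos(theta)], [0, length * np.sin(theta)]

fig, ax = plt.subplots(figsize=(4, 4))
ax.add_patch(plt.Circle((0, 0), 1, fill=False, lw=2))
for h in range(12):                            # hour ticks
    t = np.deg2rad(90 - h * 30)
    ax.plot([0.9 * np.cos(t), np.cos(t)], [0.9 * np.sin(t), np.sin(t)], 'k')
ax.plot(*hand(hour_angle, 0.5), lw=4)          # hour hand
ax.plot(*hand(minute_angle, 0.85), lw=2)       # minute hand
ax.set_aspect('equal')
ax.axis('off')
plt.savefig("watch_5_38.png")
```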
That's totally true, but it's crazy how we can still simulate on the fly learning with large context windows. When the AI can "remember" so far back in the conversation, in practice it acts as if it has learned.
@@MrTomyCJ I use LLMs all day for programming, and even with a huge context window they start to go off track very quickly, and you need to start fresh. Custom GPTs and Claude's version do the same. You can have a 2-million-token context window, but you won't get anywhere near using all of it, because by that point it will be so far off subject/task that it will just be spitting out nonsense. They test them by feeding a whole book to the LLM in the prompt and asking a question about what's on page 400 or something, which is fine, but when it needs to work things out and stay on task, or keep a certain format and remember to stay in that format, it soon starts to go wrong. Try a large document in Google's NotebookLM: the podcast audio can start to say things you never supplied it, which is quite comical.
That's only true because the models are too large. Eventually, with enough pruning and fine-tuning, you can add training. You don't need to retrain every layer: certain layers would be fixed, while others can be updated via backpropagation in real time.
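A minimal sketch of what that could look like in PyTorch (illustrative only, not a claim about how production LLMs are actually served): freeze most parameters and keep a small trainable head that gets updated on the fly.

```python
import torch
import torch.nn as nn

# Toy model: treat the lower layers as frozen "knowledge" and the top layer
# as a small adapter that keeps learning from live interactions.
model = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),                 # trainable head
)

for p in model[:4].parameters():        # freeze everything except the head
    p.requires_grad = False

opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=1e-3)

x, y = torch.randn(8, 512), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                         # only the unfrozen head accumulates gradients
opt.step()
```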
I don't agree that humans are able to solve ARC problems despite not seeing similar problems. The problems may look new but in reality they will have similarities to many other problems we've seen before.
AI: ANI = limited, AGI = GANI = broader, GAI = human-like, SAI = beyond-human
AGI used to mean "like human", but the meaning got twisted so that it can be claimed, so now GAI (expected to become SAI) is what used to be AGI.
A = Artificial
N = Narrow (limited to a specific task)
G = General/Generalized (AGI now means not so limited, but still unable to generalize beyond its less-limited scope)
S = Super (exceeding human)
I = Intelligence
Ooooh, how about "Artificial Narrow Super Intelligence, Generalized"? That'd be "ANSIG"! ... I didn't notice we weren't making abbreviations here. Aaagh.
@@brunonikodemski2420 I would not even call it AI (but programming... and yes, I am a programmer, firmware/embedded) "Expert systems became some of the first truly successful forms of artificial intelligence (AI) software." AI for me means "some form of machine-learning", not programming in Lisp/Prolog/Haskell.
I think this video lacks an explanation of the new scaling paradigm OpenAI achieved with the o-series models. They can now scale test-time compute on top of pre-training compute. If anything, the immense jump in capabilities o3 shows compared to o1 suggests there is no scaling wall yet. They can then use huge amounts of inference compute on o3 to create massive amounts of synthetic reasoning data to train a better base model for o4, and the loop starts over. I think you should have posted some of the quotes from o-series research scientists like Noam Brown; they really seem to think this rate of progress will continue at a much faster pace than traditional pre-training scaling. If anything, it kind of looks like we will reach ASI in complex tasks like math or coding before reaching an AGI that satisfies every possible thing a human can learn.
In real-world applications, o3 is a step back. For real-world programming it is absolutely worthless, even worse than ChatGPT-4o, which is not that great either. Standardized tests that the o3 model has been trained against are irrelevant in real-life use cases. And the cost of these, at best diminishing, returns is increasing exponentially.
AI is already an overused term that doesn't mean what it is supposed to mean. Therefore, AGI should mean what AI is supposed to mean. Namely, a machine intelligence that has sentience akin to humans.
My wife’s brother was doing a quiz today and couldn’t figure out the name of a musician with the letters shuffled. I tried Gemini and Copilot and they were hopeless: names that didn’t contain the same number of letters, hallucinated letters and names. My wife sent the same letters to a friend. He replied saying someone had taken the P, and gave a name which otherwise matched perfectly; her brother had indeed missed out the P. I think this is an illustration of what general intelligence means. Maybe solving cryptic crossword puzzles is a better guideline than maths tests. P.S. Copilot’s response was weird: it said maybe this, then oh no it could be this, but then, etc., before confidently choosing Stevie Ray Vaughan (the letters were tddrydrnegaess).
This is because LLMs don't see the actual letters in a word; every word is converted to a token ID for them. It's the same reason they don't know how many Rs are in "strawberry".
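The friend's sanity check, comparing the multiset of letters, is trivial once you work at the character level rather than the token level. A quick sketch showing why Copilot's "Stevie Ray Vaughan" guess can't match the given letters:

```python
from collections import Counter

def compare_letters(scrambled, candidate):
    a = Counter(scrambled.replace(" ", "").lower())
    b = Counter(candidate.replace(" ", "").lower())
    missing = b - a            # letters the candidate needs but the puzzle lacks
    extra = a - b              # letters the puzzle has but the candidate doesn't use
    return missing, extra

missing, extra = compare_letters("tddrydrnegaess", "Stevie Ray Vaughan")
print("missing:", dict(missing))   # letters Copilot's guess would need but aren't there
print("extra:", dict(extra))       # leftover letters the guess never uses

print("'strawberry' has", Counter("strawberry")["r"], "r's")   # 3, easy at character level
```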
Tell that to someone in 2018 and they'll tell you that o3 is obviously AGI. The goalposts keep moving. Calling a general-purpose reasoner that can outperform most humans at most short-to-medium-horizon cognitive tasks "narrow" is head-in-sand behavior.
@@41-Haiku Until it's as general as a human mind, it's not AGI. In order to be AGI it has to do every cognitive task at least as well as a human, no exceptions; as long as there are cognitive tasks humans can do and it just can't, it's not AGI.
Congrats, you're grasping the surface level of the discussion. Next you have to realize that some things can't be measured so what matters is the outcome, or what we can observe.
Companies like OpenAI have long moved on from "scaling data" to "scaling test-time compute", which are very different things. At this point it's pretty much obvious to most people that data scaling is very close to its ceiling, if it hasn't already hit it.
@1:24 One can give equally valid reasoning for both the blue and the purple:
blue: echo the most frequent pattern
purple: echo the pattern closest to the lower-right corner; or echo the pattern containing the square 5 squares to the left and 3 up, starting from the lower-right corner
...many (infinitely many?) rules might be constructed from the 3 'source' patterns, each leading to a different 4th.
I’ve never thought that any *single* technology will be involved in creating AGI-it will take more than one working in concert. LLMs will probably play a part in it, though.
I use LLMs locally and extensively, follow the latest updates, and hire servers to create service language modules for clients. I can tell you that past a certain point the "heat" and nonsense is overwhelming. In the last 3 years they have clarified the initial phases, improved the user experience (better UI/UX, etc.), and brought the cost of use and/or ownership down somewhat, but the derailing, entropy, and mania that occur past a certain point are as present today as they were in early 2022. In fact, the ease of use and speed increases mean that, in terms of "time spent on your AI", you are likely to encounter it even more quickly now than you did 3 years ago.
Sorry you feel like that; I do understand to a degree. Within its limitations it's very good. My issue is with people making claims well above its limitations (to make money, to inflate share prices). I'm almost 60 and need to keep skilled up to stay relevant for the next 12+ years until I can afford to retire. If that causes you a problem, I'm sorry about that. But AI isn't "coming", it's here. My take was "get on board, or get left behind". The fact that we are miles away from a general AI is a sidebar to the impact iterative generalised AI is already having. No one is going to stop now; that would be like the Luddites throwing their shoes into the loom 200 years ago.
@@PaulRoneClarke My problem is how its goal is to steal jobs by stealing people's previous output while absolutely demolishing copyright law into smithereens, all the while wasting insane amounts of energy in a planet with an already collapsing environment
4:08 No need to wait till the end of 2025; we have had systems that can do certain cognitive tasks better than humans since the first calculators were invented.
Hi, I'm working on open-source AGI, and I must say we're heading into the year of trust. Would you trust someone with AGI resources? My own definition: AGI is when it can fully utilize any computer and any software, in order to develop its own software and grow.
Also, current LLMs are, structurally, hallucinations limited by data. LLMs will be replaced by a newer architecture that has a conscious hallucination driver, to consciously stop a hallucination.
8:04 Yup, and it will most likely be another AI winter that lasts at least a decade :|. I just HATE this stupid hype that focuses on one specific thing and gives a false feeling of a completed journey. It slows down progress, as people start waiting for something impossible from overhyped technologies and throw them away after seeing their obviously stupid dreams crash against reality.
There is hype, but most of it comes from small start-ups that have little to offer, that are just trying to cash in. So, "AI hype" is really only a useful term for Wall Street. The leading AI companies are actually pretty realistic in their comments. As for another AI winter, previous AI winters were largely caused by attention and funding turning away from the challenges. There were very few people working on AI and little money being spent. Given the huge increase in the number of people working on AI and the amount of money being spent, there is zero chance of another AI winter. It's just not how the world works.
@@netscrooge Do you think there will always be billions of dollars spent on AI? And as for companies' 'realistic' predictions, I've heard a lot from Sam about how he will make AGI and how it's coming in 2025.
Where is the winter? Scores of articles proclaimed that AI was hitting a wall, and then OpenAI unveiled a reasoning model that smashed a (supposed) AGI benchmark and can solve some novel PHD-level math problems. Where is the wall? Where even is the stutter?
The approach to AGI will be like Zeno's Paradox: the "closer" Silicon Valley gets the farther away the destination will become as the energy requirements for closing the remaining 50% gap go increasingly hyperbolic. The human brain has 86 billion neurons and 100 trillion synaptic connections and can produce insights with the energy of a peanut butter sandwich. AGI is so primitive in comparison it will probably take 10,000 nuclear reactors to do something comparative.
The energy costs are the really interesting part of the problem I agree. I can see them getting 'smarter' in most metric regards than humans - hell, I think in many respects they're already smarter than a lot of people I've talked with in certain regards. But the energy efficiency of a human brain is indeed many orders of magnitude better. The energy requirements for the current generation of AI are completely out to lunch, and as that reality sinks in with the investment community the shine may come off the entire affair with shocking rapidity. Still, I imagine that governments are going to want them for purposes of military planning and social propaganda control, and they'll be willing to pay a great deal for that.
It would be hilarious if this crashed everyone's economy: trying to create a human, and then old man Warren Buffett and the ghost of Munger buy up everything cheap.
How many of those 86 billion neurons are actually used in the process of generating so-called "general intelligence"? Isn't it possible that, like junk DNA, many of them are merely cumulative products of the evolutionary process? Perhaps it is the way specific connections process stimuli that creates general intelligence, not the sheer bulk of connections.
The official definition of AGI is "an algorithm smart enough to convince Sabine Hossenfelder that it is an AGI." With the current exponential growth in capabilities, we are on track to reach this goal in approximately 300 years.
It's simple. We will know AGI is reached if/when it's literally impossible to create new benchmarks where the average (possibly skilled, like a researcher or professional) human isn't able to beat AI by a significant margin anymore. Right now the ARC test is being updated and ChatGPT is projected to have a score of 30%, so we should still be good for some time. Plus the fact there are still many fields where AI is still lacking compared to professionals, like music or 3D modeling. Even 2D images are still not quite there yet, just look at all the AI slop channels that have been popping up recently.
"Literally impossible" is maybe a high bar; it's already pretty hard. And by "some time" do you mean 6-12 months? On the exponential curve we've been following, a year from 30% isn't 45% -- it's more like 70%+.
The new o-series of models unlocked what is called "test time compute", a new scaling paradigm on top of traditional pre-training scaling she mentions in the video. They can spend more time "thinking" (generating more answers, more computation time so more energy used yes) to get better answers. They then can use all those reasoning tokens generated to train the next big base model, at least that is the hope and is possibly how they trained o3.
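In its crudest form, "spend more compute at answer time" is just sampling many candidate answers and keeping the one a scorer likes best. A toy sketch of that idea (real reasoning models do much more than this; `generate` and `score_answer` are stand-ins, not any actual API):

```python
import random

def generate(prompt):
    return f"candidate answer {random.randint(0, 999)}"   # stand-in for one sampled completion

def score_answer(prompt, answer):
    return random.random()      # stand-in for a verifier / reward model

def best_of_n(prompt, n=16):
    candidates = [generate(prompt) for _ in range(n)]      # bigger n = more test-time compute
    return max(candidates, key=lambda a: score_answer(prompt, a))

print(best_of_n("Prove that the sum of two odd numbers is even.", n=64))
```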
We keep moving the goalpost. First it was the Turing test, then it was average human intelligence, and now that the LLMs are starting to outperform elite humans, we're probably going to move the goal again. I don't mind, but it is amusing.
💯 humans coping with gradually being outmatched in every field by AI. These things are held to a far higher standard than the average human. Every weakness it has is highlighted while ignoring how it's landsliding humans in most other metrics. Human ego and delusions have no limits.
@@DeathDealerX07 Which problem has AI solved that was not solved by humans before? Right, none. So it is a better search algorithm over the database of issues that humans have already solved. If your definition of intelligence is just copying what has been done before, then yes, AI is probably underestimated; if you expect it to work through issues with its own reasoning and thoughts, then AGI is still very far away.
@@Techmagus76 That's a fair point, but it also describes perfectly what they're talking about: the shifting goalpost. Every time it advances, people move the post to diminish the current result. You aren't denying that it's very rapidly outperforming humans in most "mental" tasks, so instead you shift the goal to something along the lines of "well, it hasn't created anything new". It's beaten every goalpost placed before it so far; the idea that it's going to stop just because is wild. No one knows if it will or won't, but being adamant that it's incapable is ridiculous.
@@Techmagus76 What problems have you solved that another human hasn't solved? I'm not trying to be snarky but most economically valuable work doesn't require novel problem solving or creativity, just being able to follow clearly defined business processes is enough for most white collar jobs. Most of us aren't going to be replaced by an AGI, we're going to be replaced by a dumb, inert machine in some distant data centre
I started to learn to drive the first time I got in a car as a child: observational skills. Same for the dishwasher. How many times did our fathers tell our mothers, "not like that"? Humans are learning all the time; we follow patterns and are predictable as a result. The argument that a 17-year-old learns to drive in 20 hours is useless, as is the 10-year-old who knows how to stack a dishwasher straight away. A number of teenagers were shown a rotary-dial telephone and couldn't work out what it was... yet a chimpanzee did. Do we simply say that all teenagers are stupid and chimpanzees are more intellectual than humans? Of course not. This is why defining AGI is so difficult. Defining intelligence is hard, especially when people such as LeCun make huge, wholly unfounded assumptions and assertions solely to create a stage for themselves. The same people tell us hydrogen is still the future, not EVs, despite EVs now being cheaper to make than ICE cars and hydrogen going up in price repeatedly. I wouldn't want to be standing in LeCun's camp, or in the hydrogen or ICE car camp.
Many of these people have been saying for years that intelligence is deeper than just learning a bunch of stuff in a big neural net, and they can't bring themselves to admit they were wrong about LLMs, no matter how good they get.
Great comment. I can't believe that an expert used such bad examples, and that another scientifically minded person then thought they should be included in this video.
I'm doing an MEng in Signal Processing and ML as well as a BS in Mathematics, and I've looked at some research at the intersection of neuroscience and ML. Here's what I think. Current AIs are built upon statistical algorithms designed by humans according to mathematical theories invented by humans. The statistical algorithms supposedly represent the ability to "abstract information" and to apply those abstractions to classify 'new' information. Roughly, current AI works by instantiating the statistical algorithm with some chosen default parameters and then training the model so that it adjusts those parameters, with the expectation that "data sufficiently close to an expected input" will yield "data sufficiently close to an expected output"; the algorithm along with the parameters then represents some category of "abstractions". To clarify the abstraction: if I feed an image/video of a rainbow-colored, upside-down dog to the AI, I'd expect it to neglect the color, extract the shape of the object, and predict that the object is a "dog" or something "close to a dog", unless some perturbation, such as additional information in the picture or video, makes it unlikely to be a dog. In the case of a video, I'd also expect it to assume that the dog/object in different frames is the same one regardless of physical deformation, light variation, and so on, as long as the frames are "sufficiently close to each other with respect to time" and the deformation is continuous (yes, topology). This can be a philosophical debate, but naturally I think that's also what most people would assume. So we can clearly see the limitations: AIs are bounded by algorithms given by humans, while human minds are unlikely to be bounded by those statistical algorithms but are very likely bounded by some of Mother Nature's algorithms (provided they exist). Unless we can show that those natural algorithms exist and that there's a set of man-made algorithms that accurately models them, we can't faithfully say AI really captures human intelligence. Neuroscience hasn't made much of a breakthrough on this problem. In addition, another major ability that I believe AI lacks is the ability to "change its mind", i.e., to change its own code/algorithms without breaking them. Although it seems possible to mimic this by having two or three neural networks changing each other's code, starting from some initial conditions, there's the possibility of one mis-changing the other, which might kill the whole network; the technical details of such networks are quite complicated. I'm fairly optimistic about the research in neuroscience and AI, and I enjoy the AGI hype as there are many things to uncover, but I think we might be at least two decades away from actual AGI.
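The "adjust the parameters so expected inputs map to roughly expected outputs" step is, concretely, just iterative loss minimization. A minimal sketch, nothing more than textbook gradient descent on a toy linear model:

```python
import numpy as np

# Fit parameters w of a tiny statistical model y ~ X.w to example data,
# i.e. nudge w so that expected inputs map to (approximately) expected outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # training inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)    # noisy expected outputs

w = np.zeros(3)                                # "chosen default parameters"
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)          # gradient of the mean squared error
    w -= lr * grad                             # adjust parameters toward lower error

print("learned:", np.round(w, 2), "true:", true_w)
```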
The irony of DeepSeek is so wonderful. ChatGPT was trained on basically any material (copyrighted or not) without compensating the authors; the question of plagiarism has always orbited OpenAI and all the AI services. Now DeepSeek comes along. From what I read, it's better than OpenAI's models and uses far cheaper hardware; the total cost for DeepSeek was less than $10M, what one venture capitalist called "a joke of a budget". And how did DeepSeek achieve such amazing results on a peanut budget? It trained itself on ChatGPT output. Double irony: here's a Chinese company in a non-democratic country sticking it to "the man", the techno-oligarchy of democratic America.
Language models are one part of AGI, but an important one. Others are strategic and logical thinking, motivation, ability to sense and understand surroundings and affect it through actions. However, even with current LLMs amazing applications can be implemented.
Many of these things can be handled through language, e.g. the ability to sense and understand the surroundings and affect them through actions: feed the AI an image of its surroundings (current LLMs can already process images), let it perform actions by emitting commands, then just run it in a loop, feeding it the results of those actions, and so on.
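That loop itself is easy to write down; the hard part is the model in the middle. A sketch only, where `vision_llm` and `execute` are placeholders for a multimodal model and an action interface, not real APIs:

```python
import time

def vision_llm(image, goal, history):
    # stand-in for a multimodal model call: returns the next command as text
    return "look_around" if not history else "done"

def execute(command):
    # stand-in for acting on the environment (robot, browser, OS, ...)
    return f"result of {command}", b"new camera frame"

def agent_loop(goal, first_frame, max_steps=10):
    image, history = first_frame, []
    for _ in range(max_steps):
        command = vision_llm(image, goal, history)   # sense + decide
        if command == "done":
            break
        result, image = execute(command)             # act, then observe the new state
        history.append((command, result))
        time.sleep(0.1)
    return history

print(agent_loop("tidy the desk", b"initial camera frame"))
```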
When it comes to LLMs, you’re taking patterns encoded with values and statistically sorting them into a synthetic pattern driven by the input language. It’s more or less a straightforward concept. But then there’s reality-it has tons of patterns and values all over the place. And I don’t know about y’all, but AGI to me has always represented something far more capable of working in such an environment, not just something that spews fractal fountains of inferred gibberish. Ahem.
Exactly. We overestimate how much information human language carries, and overestimate even more how much of it is written down. Human language is too low-resolution to describe the complexity of the universe; it is a tool we created to fix our practical problems, not to draw an accurate picture of the universe. It is not just an engineering problem but also a data problem: they have absorbed all the digitally accessible information, and this is it.
@@utkua People discuss the dangers of synthetic data, but what is it these models become once they're baked on a rack? Language itself is reductive, expressive of phenomena intrinsically owned by the self.
@@lobabobloblaw Simulations are also reductions. You cannot even find an accurate physics engine that works in real time; most of them are just for games and only need to be reasonable enough.
Once the LLMs gain access to the sum total of human knowledge, they won’t be able to go any farther until humans discover more to feed it. That’s not AGI.
They have already sucked the entire Internet dry for training GPT 3.5 and GPT4o. The great hope is that somehow, magically, the output of O3 can be reused as synthetic input for generating more data. How that is supposed to work and produce superior outputs from an information entropy point of view is a mystery to me.
@@dominikvonlavante6113 We do it as humans all the time. It's mostly a process of generating random noise and occasionally discovering something useful in the giant pile of informational detritus. It's the ability to recognize when something might be 'useful' that's the most important in this regard.
@@dominikvonlavante6113 It's not magical, and they've been doing that for over a year now. You can create a "textbook" of high-quality synthetic data from one model and train another model on it, and the model will get better, not worse. The key is filtering the synthetic data well, and using the additional examples to reinforce that type of problem solving.
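Schematically, the recipe people describe publicly looks something like this (a sketch of the general idea, not anyone's actual pipeline; `strong_model_solve` and `verifier_ok` are placeholders):

```python
import random

def strong_model_solve(problem):
    # stand-in: an expensive reasoning model produces a worked solution + a confidence
    return f"step-by-step solution to {problem!r}", random.random()

def verifier_ok(problem, solution, confidence):
    # stand-in: keep only solutions that check out (unit tests, math checkers, graders, ...)
    return confidence > 0.7

def build_synthetic_textbook(problems):
    kept = []
    for p in problems:
        sol, conf = strong_model_solve(p)
        if verifier_ok(p, sol, conf):        # the filtering step is what keeps quality up
            kept.append((p, sol))
    return kept

textbook = build_synthetic_textbook([f"problem {i}" for i in range(100)])
print(len(textbook), "verified examples ready for fine-tuning the next base model")
```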
@@dominikvonlavante6113 Eating your own shit does not make you stronger. "They have already sucked the entire Internet dry for training GPT 3.5 and GPT4o. The great hope is that somehow, magically, the output of O3 can be reused as synthetic input for generating more data."
The plan is probably to teach AI to navigate and problem-solve by simulating it embodied in virtual environments; then it can be its own source of "data" once it can effectively traverse real environments from simulated experience. That, plus test-time compute, can help, as can turning benchmark problems into virtual practical problems that the virtual agent can iterate on. Imo we already have all the necessary ingredients to solve AGI; they just haven't been combined and put in the oven yet.
AI might be able to do what a human can do a lot faster or even better, but AI cannot do what humans can do in the sense of being creatures that stabilize the instability inside, hence unpredictable but intelligent beings. If some day the AI, at whatever level it is, starts asking the question "why?" without being programmed to do so, that day I will take it seriously.
Human intelligence is actually a lot more expensive, unless it's o3 on high. Human intelligence needs a wage (a large one if it's especially intelligent and knowledgeable) and needs food, water, housing, entertainment, etc. Add it all together and you get quite the sum.
@stagnant-name5851 What's the point of money anyway when all work is done by machines? Nobody has to pay them, except in energy, which by then is probably fusion energy anyway, maintained by, you guessed it, machines. Stuff would just be available to anyone anywhere. Every human could get the best of everything available, because there would be no point in making a less good but cheaper product when nobody has to afford it, since nobody has to get paid. The transition is going to be long and quite painful: you keep learning new jobs that will be replaced by another AI and another robot, and then, with fewer and fewer jobs, you'd struggle to find any sort of job at all.
@stagnant-name5851 First of all, companies that sell AI services, like OpenAI, are still far from being profitable. This in itself is a critical hurdle that must be overcome in order to secure the future of AI. But beyond profitability, there are deeper technical and societal aspects to consider. The intelligence of an AI system largely depends on the number of users interacting with it. As more users engage with the system, its overall performance can decline. This is why there's a continuous effort to increase AI's processing power. For AI to operate at the level of an average human while serving millions of users, the energy demands would surpass our current technological capabilities-at least for the next 60 years. In essence, AI is a tool, much like a computer, that helps humans boost productivity. As these tools become more advanced, they pave the way for new jobs and make tasks that were once expensive or difficult much more accessible and affordable. One potential outcome could be a scenario-either utopian or dystopian-where AI is capable of performing everything a human can, making human labor obsolete. In such a world, people might no longer need to work and could focus on pursuing their passions, free from financial constraints. Money could lose its relevance entirely. With AI handling everything from physical tasks to intellectual work, society might shift toward creativity, leisure, and personal fulfillment. However, this vision also brings new challenges and questions about human purpose and the structure of society. The most significant obstacle to creation, however, is energy. This is a fundamental law that governs all of nature. Nature has achieved remarkable efficiency, evolving systems that operate with minimal energy waste. It takes us decades to even come close to this level of efficiency with our current technology. Whether we’re trying to replicate biological precision or achieve higher energy output with fewer resources, we still have a long way to go in closing the gap between human technology and the inherent efficiency of nature’s design. In conclusion, despite all the advancements and hype surrounding artificial intelligence, the reality may be that the cheapest and most efficient form of intelligence remains human intelligence. While AI has incredible potential and can revolutionize productivity, the energy costs, technological limitations, and complexity involved in replicating human-level cognition are immense. Nature, through evolution, has optimized human intelligence over millions of years to operate efficiently with minimal energy consumption. Until we can significantly bridge the gap between human ingenuity and technological efficiency, human intelligence will likely remain the most accessible and cost-effective form of intelligence.
Sabine, I trust you more than 90% of the "science" channels on YouTube. You have scientific skepticism and use titles that aren't just clickbait. The rest are like "Omni" magazine, a "science" magazine that was sensationalistic with very few facts. Keep doing what you are doing! Science isn't about "knowing", it's about asking the right questions and finding out where they lead. Sam Altman is an under-40 guy who talks with "voice fry". NOT reliable. These people never admit that our brains don't function with mathematical algorithms! Has a computer ever independently developed Newton's laws of motion?
I think you mean vocal fry, and I guess you don't trust anyone in Finland since they all use it. Your thinking killed Galileo but ultimately couldn't hide the truth forever. We are not special in the universe.
One vote for no AGI in 2025, but usable LLM and ML stuff, specifically in key verticals. For example, data science: people are already throwing data sets at the current LLMs, and the market for data-science work on Upwork has dried up. So too graphic design: customers are already generating their own graphic design work and the field is suffering. Translation is another field that will be affected, as LLMs are great translators of documents, even handwritten documents of which you only have a blurry picture. So: usability yes, but AGI, no.
One person who gives a very good explanation of what an AGI is (one definition) is David Deutsch. He would probably say something like: a chess program, however good it is at playing chess, even if it can learn from its mistakes, is not an AGI. It lacks creativity and can hence never create new knowledge, even about chess. It cannot step outside the algorithms it follows; it doesn't know it plays chess and it cannot choose not to. An AGI, however, would have creativity and could say something like "I don't want to play chess, I want to do... something else." Creativity is, I think, a main parameter of an AGI. That doesn't mean an AI isn't vastly better than any human in many areas. If we could write the program we have in our brains into a computer system, it would be an AGI. If we ever create an AGI (far into the future), it will probably be different from the human program, but it will have creativity! Otherwise it just follows instructions. A common misconception is that if an AI gets good enough it will eventually turn into an AGI. That is wrong. It's a completely different kind of program, one that we as yet have no idea how to make; an AGI is the opposite of following instructions. But it will have creativity! With all respect to other definitions.
AI models are already smarter than most people in most aspects. This should be obvious to anyone who's used an LLM in the last year. The question is: when do they become smart enough to automate AI research? When that happens, well, get ready for everything to change. Don't assume we'll survive that though. I'd prefer a more cautious approach. Let's not build things that can outsmart humans.
Imagine that we were limited to learning from the essays of 8-year-olds. At some point human data will be insufficient, and self-directed thought and exploration are the answers. This is what the field generally labels 'search', and the techniques often involve RL, as AlphaGo did.
I think the obvious answer to when computers achieve AGI is when they start generating new ideas and insights that no human mind has thought of before - in every field - on their own, without being prompted. When it goes from a regurgitator of data to a free, creative thinker. That's the standard for humans; it should also be the standard for machines.
That's not what AGI means. You are a non-artificial AGI even if you can't come up with important insights. AGI doesn't mean "superhuman capabilities". "Without being prompted" is just a whim imo. A system that can only reply could totally be considered AGI if it replies well enough. LLMs don't regurgitate data, they don't work like that. It's surprising how many people still misunderstand how these systems work.
@@MrTomyCJ My standard of AGI is different from yours; I'm not wrong, we just disagree. The smartest people in the world cannot agree on this definition, there is no consensus, so though you may like to, you can't speak from authority. For me, the obvious standard is something that's a long, long way off. For you, the definition is something that is achievable possibly in this decade, and may have even happened already. I never said "superhuman", I said it should be held, at least, to the same standard as human thought - spontaneous creativity and insight, without being prompted.
@@sabloff Yeah, there are almost as many definitions of AGI as there are people who use the term. One very important objective threshold is "Seed AI." That's the point at which a system (whether it's "true AGI" or not) is able to autonomously do AI research and build an AI that's even better at AI research than it is (or improve itself directly). Once that threshold is reached, recursive self-improvement occurs. Whether it would be a "hard takeoff" / "intelligence explosion" is up for debate. It might get slowed down at first by the limitations of experimentation and (hopefully) human oversight. But if it's allowed to continue (or if it escapes oversight in order to continue), a superintelligent agent will be created, and the world will belong to it--not to us.
Fuzzy Logic > Neural Nets > Machine Learning > Deep Learning > AI > AGI. Yep, there is a technical evolution across this alphabet soup. However, following the money also offers another explanation.
AI doesn't mean "very smart", AI is any system that simulates intelligence to some (almost any) degree. Even videogame NPCs are considered to be a form of AI.
The deeper question is: why has all this technology not allowed us to live like a lord of bygone years, dedicated to the pursuit of art and knowledge, free of the anxieties of worrying about making ends meet? In the same breath that tech bros talk about AI, they also talk pro-authoritarianism, cutting back state benefits for the masses, and the need for the many to suffer.
Shrinking the state means even less tax for Musk, so he becomes even richer. To achieve this, he is currently buying political power so your vote is worthless. You heard it on RUclips @SnapDragon128
AGI being far away sounds like good news. I have a love-hate relationship with AI. The current models are useful but unfortunately people have started abusing it and spamming it smh
Yeah, let's waste terawatts of energy on useless prompts, when humans could do that just to win an argument. Where are all the environmentalists when you need them?
They will come when sh*t really hits the fan. Came to add to this: yeah, let's waste terawatts of energy on a system, to end up with a good-looking answer that just doesn't work or make any sense, lol. That's already my life using AI and some agents: it goes and does it, everything looks good, but then, oopsie, it's actually wrong, and either we have to fix it or start over. Now put that at scale, like using it for bio-engineering or something. As a company you say "let's go!", burn god knows how much money, and end up with an answer that looks good, that no one actually understands why it's wrong, but that is wrong anyway. Looks like a bright future! lol. I can see some arguments strengthened by this: humans might actually become even more valuable with AI/AGI (which is not yet AGI). Humans + AIs is the answer; we are already very efficient, and add efficient AI and nothing can beat it (cost-efficiency wise).
My definition of AGI is simple: The one that can acknowledge its uncertainty by saying "I don't know" rather than generating false responses through hallucination.
Please don't undervalue delusions and deception. There are four horsemen of insanity: perception, hallucination, delusion, and deception. As I think about it, those four concepts are also the foundation of institutional valuation, and perhaps cultural valuation too. (This is an attempt at humor and it came out too dark.)
This is very down to earth and anecdotal, but I was using several text-to-image models yesterday to produce something very simple: a wide and long bamboo roller blind that is used as a photo studio backdrop - meaning it (a) fills the full image background and (b) extends onto the floor. Didn't matter which model I threw at it or how I phrased it, none of the models could let go of the associations it was trained for. What I asked 'simply didn't exist' (read: it didn't have enough data samples to create a pattern), so they all insisted on giving me what I wasn't asking for. In the end, to solve such a case, either you have to jump through hoop after hoop, building made to measure hacks through the problem, or you have to train a (sub)model specifically for that task. While an actual intelligent model would just understand the logical concepts used, and produce the image. So no, AGI is not around the corner just yet. Today, the joy AI brings me is on par with the frustration it causes. That is NOT a good business model 🙂
Nah, what's missing is any understanding of the problem they want to solve. It's like self-driving cars. When Sebastian Thrun et al. demonstrated automating some of the mechanics of driving a car and created a course on Udacity, it was clear they were all deluded into thinking they'd more or less solved the problem, and they fancied it wouldn't be long before our roads were full of self-driving cars. At that point Google and a bunch of tech companies and car manufacturers started throwing money at development. Most of them were perhaps a little more sane than Altman, barring perhaps the other delusional stock-bubble idiot Musk, who defecated complete and total nonsense about how soon Tesla would have full self-driving.
And where are we? We're no further forward. The more we've spent on self-driving cars, the clearer it is that we didn't understand the problem and the further away from full self-driving we are. It's looking likely that you need AGI, or something of that level, to automate driving on roads with human drivers. Human beings bring far more to driving than a bit of muscle memory to wiggle the controls and the ability to analyze what they see outside the windscreen.
Well, AGI and AI are basically at the stage where self-driving was: drunk on the Kool-Aid and deluding themselves that we're nearly at AGI - but we're not. We don't understand the problem, let alone have any clue how to solve it. Specifically, the important thing to note is that, just as riding around on a horse wasn't a step toward inventing the car, current AI is not a step toward an intelligent machine. Whatever you need to do to make a machine think, have intelligence, be self-aware etc. is an unknown - we don't even have a good grasp of how this works in people and other living species. We can barely define some of the terms. Specifically too, the idea that if you throw a few billion at a company with AI in its name it might discover the answer makes no sense. There's no evidence they have any idea where to start. All the evidence we have suggests that they are just riding a wave of hype created by a computer system that, at great cost, can spit out plausible-looking bullshit in response to text prompts.
That's pretty interesting, you might be onto something here. We're working on something called daydreamProtocol, which will allow agents, like looped LLMs, to play all kinds of complex games.
@michael1 It's ugly and regrettable that your comment denigrates someone you don't know at all, just so you can make a partially correct statement on a subject you aren't an expert in.
@@aaronjennings8385 If you saw something ugly aaron you were probably looking at a mirror. There are no "experts" in agi because it's something that doesn't exist and no one knows how to create it. Anyone claiming to be an expert in AGI would be as stupid as someone in the 1700s claiming to be an expert in internet networking - but not quite as stupid as the morons throwing money at him because they think he's an expert.
Seriously, you have two sources to oppose tens of world experts, one of whom claims that AGI will arrive in the next 6 years at most (so not 'never ever' like you baselessly predict, but very soon actually, just a little later than others say), and the other is known for being a buffoon who talks about AI with absolutely no knowledge of the matter and a track record of wrong predictions dating back almost 10 years! I know you can't be fully unbiased, but that much is ridiculous and unprofessional!
Honestly. LeCun hasn't taken responsibility for how much he has misled people, either through a wrong model of the world or through malice. "AGI never" hasn't been a serious position in at least a year or two, and "AGI 2027" is starting to look realistic (or already did, if you did the math and updated your expectations accordingly). More than that, most AI researchers say that there's a reasonable chance AI could drive humanity extinct this century (or maybe this decade). Center for AI Safety (CAIS) Statement on AI Risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." A global priority? What? Who on earth would sign such a thing? Turns out, most of the top computer scientists in the world, and even the leaders of the top AI labs! We can't let them create an uncontrollable technology, and the org PauseAI has an actual plan to stop this from happening.
Prof. Włodzisław Duch, a Polish physicist, specializes in applied computer science, neuroinformatics, artificial intelligence, machine learning, cognitive science, and neurocognitive technologies. He has been researching this topic for decades. He is quite confident it is only a matter of time (or actually, of scaling resources).
We should focus more on AI safety. Civilization-ending catastrophes caused by a bad use of AI do not require AGI. AGI may lead to something we cannot stop or control any more, but so can a bad use of 'regular' AI. So the relevant questions are: how do we stay in control (with future and current AI), and who exactly stays in control (democratic institutions or some totalitarian system)? We treat AGI as the most important milestone in this race, but we can create uncontrollable problems much sooner.
"The two companies agreed to define AGI as a system that can generate $100 billion in profits". This says it all.
Well, only because, as with cryptocurrency, a fool and his money are soon parted. 😂
The zeroth "super"ability that a system needs to possess to be AGI is a simple answer of "I don't know" when it ineed does not know something.
Why does nobody ever consider what artificial stupidity has to offer ????
Claude 3.6 does this already
It's funny how everybody is judging how close or far we are from AGI while admitting that we don't really have a concrete definition of AGI
Yep. Mental masturbation. Worse, every time AI gets better, they move the goal posts further away. No one can admit we are not special.
They can create a definition that will mean they have reached the goal, or initial goal.
We'll be arguing over it for much longer than there will be any appreciable difference.
We don't have a concrete definition of consciousness, either. Are you conscious? Are you sentient? Prove you're not an AI account.
You can't accurately measure something that is not yet clearly defined.
Their unit of measurement is the dollar
This is an underrated comment.
It's clearly defined by some people, and we will know when these systems become embodied.
For now they are proto.
@@TheReferrer72 What people? And what is their definition of intelligence?
@@TheReferrer72 No, we can't define consciousness yet.
Heck, we don't even have a good definition for human intelligence. Arguably, we don't even have a good example.
True, but if an AGI is useful to humans in day-to-day life, including work, then it's been achieved. Don't overcomplicate it.
@@mikezooper My car is useful, it's not intelligent. A snow shovel is useful, it doesn't even have an algorithm.
Yann LeCun is famous for saying "If you want to work towards creating human-like intelligence, don't work on LLMs."
He's working on architectures called JEPA (Joint Embedding Predictive Architecture). These can learn without language, just as most animals do. They create models to make sense of data and constantly update those models as they receive new data, to improve their usefulness/accuracy.
This is much like humans and other animals and is probably at the root of our general intelligence.
Early days yet but results so far are very encouraging. I think he's on the right road.
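For anyone curious what "predicting in embedding space instead of pixel or word space" roughly looks like, here is a deliberately toy PyTorch sketch of the joint-embedding-predictive idea. This is my own simplified illustration under made-up dimensions, not Meta's actual I-JEPA code or architecture.

# Toy sketch: predict the embedding of a hidden part of the input from the visible part.
import torch
import torch.nn as nn

dim_in, dim_emb = 64, 32

context_encoder = nn.Sequential(nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
target_encoder = nn.Sequential(nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
predictor = nn.Linear(dim_emb, dim_emb)

# Fake "observations": a visible context part and a masked target part of the same samples.
context_part = torch.randn(16, dim_in)
target_part = torch.randn(16, dim_in)

pred = predictor(context_encoder(context_part))   # guess the target's representation
with torch.no_grad():                              # target encoder serves as a reference, no gradient
    target = target_encoder(target_part)

loss = nn.functional.mse_loss(pred, target)        # loss lives in embedding space, not pixels/words
loss.backward()

The point of the design is that the model never has to reconstruct raw data; it only has to predict an abstract representation, which is closer to how the comment above describes animals building and updating internal models.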
Yeah. F* marketing! ;(
Ultimately, self-awareness requires feedback mechanisms. That's why the internet will never become aware. Creativity is just modeling with limited resources (as opposed to knowing everything).
Developing a reward system as motivation, like Dopamine in physiology, may prove to be the most difficult concept to apply.
Thank you for bringing him back to my attention. I looked him up years ago but had lost track
Humans are the only animals that possess language. Why do you assume that language isn't the core reason for our intelligence?
Many people seem to overlook that human intelligence developed organically over millions of years. Starting from microbial life, evolution has gradually woven a highly complex intelligence within us. It's conceivable that a quantum computer might simulate such complexity more quickly, potentially creating something that more closely resembles true conscious intelligence.
Children learn and make sense of the world by interacting with their environment, often through trial and error. They might try to eat inedible objects or fall numerous times before mastering the ability to walk. This process of failure and adjustment is crucial to developing competence, and a child's brain is in a constant state of growth and adaptation.
Expecting Large Language Models (LLMs) to become sentient is, in my view, akin to expecting an artificially created eyeball in a lab to see of its own volition. LLMs are tools designed to perform specific tasks based on the data they are trained on. Their purpose is limited to those tasks, and they lack the intrinsic will or drive that characterizes human intelligence. The data they're trained on, primarily written and verbal language, is inherently restrictive. For LLMs to achieve anything resembling true sentience, they would need to incorporate a broader range of sensory inputs, including visual, tactile, and possibly even imperceptible senses.
I personally find the comparison of a ten-year-old who can fill the dishwasher a lousy example. The ten-year-old had 10 years of training in understanding the physical realities of the world and his own body. Granted, humans do not need as much training data and are much more efficient with that. But we can train our brains in the world because of our bodies. On the other hand, computers are totally detached from that reality unless trained in digital twin virtual worlds.
Yup. LeCun is a contrarian who will always move the goalposts in order to be "right". Marcus is basically a troll at this point. I'm not saying LLMs *will* achieve AGI (much less ASI), but these two are just as bad as the folks who claim to know that AGI is coming next week.
True.
Wrong! It is a good example, because AI doesn't have to worry about stink, or walking around debris or junk, or dirt; it doesn't have to worry about hygiene... and besides, what does it ever have to worry about to begin with? A thing cannot be as good as a human or better unless it is subject to what we are subject to... and Sabine! If you blanket-like subscriber posts regardless of the content, well...
Our new understanding of physics is based on thought experiments. We can't go beyond 3D world space, but we might create an AI that can.
It has experienced these emotions through texts.
You would only call it replicated worry.
Thanks!
My continued appreciation for this channel. I don't do rebills, because I lose track of them, but I expect to continue every 6 months.
Two companies agreed on that AGI means "Algorithmic Giant Income" 🤣
Maybe it's not AGI, but it doesn't need to be to have a huge impact. It seems smart enough to do most white-collar jobs.
@@josephgraham439 That says more about most white-collar jobs.
Yup, and this is why the new hotness is AI agent employees. I suspect they are mostly hype for now, because most jobs have nuances that are hard to capture and require ineffable things like "common sense". But there definitely will be impact in the next few years.
This thought process is why multimillion-dollar businesses are being run into the ground. Tech-illiterate execs just asking AI what to do and thinking it has any reasoning capability at all.
It only seems that way on the surface because it's always highly confident in its answers. Be an expert in a field and ask it specific questions about that field, and you'll see LLMs fall on their face. It's still amazing, but not as amazing as a human.
Ask chatGPT how long is a piece of string (to 3 decimal places). It doesn't pace out the distance across a space, it doesn't chuff with effort, it doesn't have skin in the game.
4:50 "Because fu¢k safety" omg LOL Sabine
😂
LOL! I was just about to post that exact comment!
Love it
🤭
Direct, concise, and brilliant! 👏
Thank you again, Sabine. Calm, measured explanations. And your so dry humour. What's not to love?
Altman saying "we know how to get to AGI so we're going to start focusing more on that instead of our current products" means "We want you to think our current products hitting a wall is because of neglect, not a limit of the technology"
We're on to super-duper intelligence now.
You got it."AI" is autocomplete bruteforcing stuff with tons of data.
What wall? Did you read the TTT paper from MIT (Nov 2024)? Haven't you seen the results of that yet? There is no wall; synthetic data is the new S-curve till we hit ASI.
@@L9INO9166 A lot of researchers get glimpses of it.
This is dishonest, because Sam Altman has publicly stated the limitations of previous scaling methods: "Altman suggested that the research strategy that birthed ChatGPT is 'played out' and that future advancements in artificial intelligence will require new ideas."
A lot of white-collar work is more concerned with process than with actually engaging in novel problem solving, so although I don't think something like o3 will replace white-collar workers, I think it will massively reduce the number of white-collar workers needed for a particular business function. Work in a team of 5 people? In a year or two, that team will be 2 or 3 people.
Remember, the number 1 cost for most organisations is labour and they have every incentive to cut costs.
It will be more like shrinking from 5 to 1
A lot of office work is very repetitive, even on management level.
@@satchillananda Not really. AI can quickly generate a code snippet, but if there is a bug, or if it doesn't fulfil the business requirement, AI is not going to help much, so you have to start debugging the code it generated. So now suddenly your team is working on fixing bugs and addressing business requirements, and again you need the same number of employees or more. This is, however, only true for small to mid-size businesses; large organizations might need more employees as they have complex code bases. So I predict small and mid-size companies will save a lot of money, but they won't hire more people, and those people will end up joining large organizations.
I suppose it depends on the nature of the business, but in my experience working on projects for customers who are other businesses, the novelty level is always high, meaning the customer either wants something which hasn't yet existed in our software, and/or it will require troubleshooting a novel situation. The reason for that is that the pace of change in the various apps used by our customers is incredibly fast, with each upgrade yielding different results, interfaces, etc. Believe me, sometimes I wish the 'magic of AGI' could be deployed to easily resolve some of the more intractable issues we have with the LLM being used for our product. My only concern is that the naive C-suite and managerial class aren't knowledgeable enough to understand these limitations (or some will know but not care).
That is an illusion. Ever worked in a big company on something like the automated placing of complex orders? (Not ordering something online or in a shop, but orders where each step steers one or more teams to perform tasks at the correct and most efficient time.) Forget about AI or AGI doing this at the moment.
Example: a client asks for a telecom line connecting point A to B, both located in different countries. Go and dig up some information on what effort is involved in this. That is white-collar work. Those jobs won't be replaced by AGI anytime soon (they may be assisted, though). Too many people picture white-collar jobs as profiles like an accountant or so; that is just a minority, though.
And still, even an accountant... Try to replace him with an AGI that knows all the rules and see if your company still makes the same profit after taxes... I doubt it, since the engine will lack the creativity of a (good) accountant.
It will be the same as it has always been for the coming decades: people will get smarter. A job that can only be done by a select group of people due to its complexity right now, may be called a simple job within a couple of years.
ChatGPT: sure you can copy my homework, just change it a little
China: *I am ChatGPT*
Identity confusion like that is actually fairly common in AI models. "Out of 27 AI models these researchers tested, they found that a quarter exhibited identity confusion, which "primarily stems from hallucinations rather than reuse or replication"." Also keep in mind that this model outperforms all other AI models in almost all benchmarks (and completely smokes all in math performance), which is kind of impossible if the model was just a copy.
@aiuifohzosfdh Grok and Claude know they are themselves; not being aware of oneself just means being less self-aware imo.
Everyone talks about ChatGPT, so obviously it's all over the training data, but not quite understanding the context there is still telling imo
@@nyanbrox5418 No, Grok and Claude don't know anything; they predict the answer most likely to be maximally satisfying for the user. Deepseek does the same, and if you ask its identity it will say it's Deepseek. There have been issues though, due to its training data including ChatGPT outputs from the internet. Many models, as scientific research has shown, exhibit the same "hallucinations". The fact that Deepseek outperforms literally all other published models makes it obvious that it's not just a copy. How can a copy be better?
We'll know we've got AGI when it starts asking questions instead of answering them.
Eh, Cleverbot could ask questions.
If it starts taking action, then I'd be scared.
I can build an LLM agent that asks questions in 5 minutes. We need better metrics.
@@NeoSHNIK I would say then that it needs to ask questions based upon internal prompts not guided by predefined rules: i.e. it needs to generate a form of "want" for the knowledge, not "I've been told to ask questions on this stuff".
@@TheWorldsprayer Yeah, until it can think on its own, it's not AGI to me. It can't rely on prompts to process information; it should process information on its own constantly/consistently and as one complete mind with access to all its parts at once. It should be able to send me a message when it "wants" to, and then I'll be open to it being an AGI.
A big missing factor is to continuously update the model according to its interactions. Currently not really feasible to retrain indefinitely but we might very well reach AGI just scaling what we have.
6:00 I think that the act of living is a ton of learning that doesn't usually get mentioned. Living in the world from zero to ten is training. Being social with people and parents is training. Moving from one side of a room to another, experiencing a sunrise, getting burned, is training.
There are probably thousands of data points needed to walk...
@@JMSouchak more likely millions, given the sheer amount of touch, pain, position, and motion information that gets relayed from our body to our brain
@@eveleynce Exactly; I think people often forget the absurd amount of training that living things go through doing what we very biasedly consider easy.
@@JMSouchak And there are currently robots that walk and move way better than any of us do. Robots are achieving superhuman ability in such things.
But still, humans tend to see an analogy after as few as two examples; we can compare everything to everything without needing thousands of examples. AI is incapable of that.
Ignores LCM and tries to cite LLMs as AI. AI is not an LLM. It is a system of components. Anyone trying to denote that an LLM is AGI by itself is ignorant of LLM routers, complex systems, and interconnected networks.
We already have AGI, it is just not in the form of an LLM.
2 - Ignores LCMs, ignores federated systems, ignores TTT, ignores transfusion models, ignores the fact that on the ARC benchmark the average person scores 67%, ignores botnet possibilities, ignores many other advancements like Google's new paper on regression learning.
3 - Take it from me: AGI is here, and we're actually working on ASI frameworks now.
If my understanding is correct, to pass ARC tests one needs TTT rather than a plain LLM. If that's correct, Sabine completely misses the mark in her assessment by saying we're hitting some data wall, as TTT isn't all about (initial) data.
AGI needs $2,000 to finish one task, but a human just needs a McDonald's hamburger to fuel up and crush a whole to-do list
$3000*
That depends on the task. You wouldn't need to spend that much to make a robot do the dishes. If we're talking about researching the cure for cancer for 1 month, then maybe you'd pay a human a lot more than that.
So, considering MickeyD's prices lately, you're saying AGI is cheaper?
Came to the same conclusion. And give me a really well-made T-bone steak instead of McDonald's and I will astonish you with what I am capable of 🤣. Really, you would be surprised how good motivation can push the human brain to go above and beyond.
Wait until the photonic chips take over, 1,000,000x more efficient
You cited Yann LeCun and Gary Marcus, the two biggest AI skeptics in the world right now, both of whom have been proven wrong to an embarrassing degree multiple times over the past few years-all while failing to cite the _dozens_ of leading AI experts who are NOT skeptical and who believe AGI is not far away at all. It feels like you just cherry picked the two guys who confirm the personal belief that you already hold. That’s fine, but you shouldn’t present the video as if it’s an objective analysis of the current situation. Objectively speaking, there are far more leading AI experts who believe we’re quite close to AGI than there are those who believe it’s very far away.
@ First of all, that's one survey and it's from 2023. Given the pace of recent AI advances, 2023 is a lifetime ago; go read up on what the leading lights in the field are saying now. Virtually all of them have significantly shortened their timelines.
Yann LeCun has _absolutely_ been proven wrong multiple times over the past few years. To give you just one example, he confidently stated during a podcast interview that an LLM would never be able to accurately predict what happens to a glass of water on top of a table if you move the table. I repeat: he said an LLM would literally _never_ be able to answer this correctly, because LLMs are trained on text and therefore don't have a true understanding of the physical world. Not only was he wrong in saying it would _never_ happen, but he was wrong at the moment he said it; before the podcast had even ended, people were already posting screenshots of GPT-3.5 nailing this question on the first try. So he just speaks with a completely unwarranted degree of confidence about things he is very often wrong about.
Lastly, as to my own supposed lack of understanding-I’m an AI engineer and have been working in this field for nearly 20 years. I have a PhD in computational linguistics, and at my current company the majority of the work I do is on LLMs. I have been published multiple times in peer-reviewed journals on the subject of AI, and have taught college-level courses on machine learning, computer science, and other related topics. So I think it is perhaps you who should “take a look in the mirror”; I guarantee you that I am more knowledgeable on this subject than you are.
And of course the most informative comment would be ignored into oblivion. I just wanted to comment so you'll know there is a growing crowd of lurkers who agree with everything you've typed but don't bother arguing on the internet, because people are in denial of their own biases.
Wanted to add another comment agreeing with you. Skepticism of AI is trendy right now but it’s not necessarily the case that AI progress will slow down
You seem to "know" at least enough to be able to pinpoint the lies in her presentation, but you limited yourself to point a superficial mistake.
Sabine, I know "Physik" is your wheelhouse, but your intelligence and dry sense of humor lends itself spectacularly to informed skepticism of AI news!
Sam Altman : " we need money....sorry , i mean we have AGI"
But he never, ever, said that. Attacking strawmen is crazy!
5:04 As the creator of ARC said, "Calling something like o1 an LLM is about as accurate as calling AlphaGo a convnet." o3 is the AlexNet moment for program synthesis.
5:10 "Most humans can solve ARC problems without ever having seen anything like them". That is exactly what o3 did and the way the ARC test was designed.
You're recommended to train on the public test set though, because this saves time making sure your model doesn't struggle with the JSON format or understanding what you mean by grids.
The important part is the private eval contains novel general reasoning problems that we don't have anything online to train on, so the only way to pass it is to learn the abstract rule at the moment you're presented with it, just like children do.
Regardless, passing ARC doesn't mean AGI anyway. It's a necessary but not sufficient stepping stone on the path to it. We probably should consider it a matter of degrees rather than a boolean too. They clearly do some general reasoning now, but it's very simple.
@quantumspark343 Yeah, there's a reason the average human baseline isn't ~99%: some of them are hard, but most of them are intended to be easy for humans to solve.
I don't necessarily mean simple in terms of how hard the tests are either, I mean simple in terms of generality. This is kind of like it can learn to ride a bike by adapting the priors learned from walking to this new domain, so it recognises that dynamic balancing and inertia are applicable in a domain that's a type of exercise, but something much more general might discover quantum mechanics by running classical physics experiments that don't add up, proposing entirely new experiments, coming up with hypotheses that contradict priors learned in other domains, and developing an entire field of research before even establishing anything it can count as ground truth.
Human brains are pattern recognition machines trained by evolution. It's not like we solve ARC in a vacuum.
so where is o2?
Yah. Also, I don't know why I watched this video. It is very clear that this woman is from a different field and does not know too much about the current research.
@@tombodenmann980 Evolution? Why are you missing what we learn through our lives?
How close are we to the thing that has no definition? :/
This time I have several arguments to make against the content of this video. The podcast clip with Yann LeCun, for starters, was from a year ago. He himself has pondered whether or not reasoning models are enough to take us to the next stage, broadly speaking.
Microsoft making that silly claim about AGI is only so they can prolong their partnership with OpenAI. This deal, put simply, prevents Microsoft from using OpenAI's products for profit after AGI is reached. And as such they want to postpone that AGI definition far into the future.
I think the video has been phrasing things and interpreting its arguments backwards. It's not that A.I. companies are far from "AGI" so they are going to make more abstract claims about it; it's rather the opposite: we are getting to the point where it may well be harder to benchmark a model than it is to benchmark a human, because these models are getting to the point where they can match us at any task.
Yeah, LLMs are not the pinnacle of intelligence; they capture intelligence and intuition only in a very indirect way... but guess what, a giant monkey can be as dangerous as a man with a gun.
Still more intelligent than most of my friends.
More intelligent than me too.
not from "my friends" i have learnt to "filter"
Good job hedging by saying “most” 😅
Subhuman Intelligence
As far as that goes, I think it is already "generally intelligent" in the sense that it can respond to a question better than many professionals. To me, the next evolution in its usefulness will come with some form of memory and persistent context.
Slight correction, the strategy you mention is not just chain of thought. It's also tree of thought, where there are several chains of thought in a tree and it finds the optimal. Chain of thought is a single branch in that tree.
Thanks
O3 doesn't use tree of thought.
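For what it's worth, here is a toy contrast between the two strategies mentioned above. Everything in it (generate_step, score, the branching numbers) is a hypothetical stand-in I made up for illustration; it is not how any particular lab, or o3, actually implements reasoning.

# Toy sketch: chain of thought = one branch; tree of thought = many branches plus pruning.
import random

def generate_step(partial_solution):
    # stand-in for "the model proposes a next reasoning step"
    return partial_solution + [random.random()]

def score(partial_solution):
    # stand-in for a value function that rates how promising a partial solution looks
    return sum(partial_solution)

def chain_of_thought(steps=3):
    sol = []
    for _ in range(steps):
        sol = generate_step(sol)          # single branch, no backtracking
    return sol

def tree_of_thought(steps=3, branching=4, beam=2):
    frontier = [[]]
    for _ in range(steps):
        candidates = [generate_step(s) for s in frontier for _ in range(branching)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]  # keep only the best branches
    return frontier[0]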
Lol, prefer removing the safety clause because "eff safety". Sabine is the best.
You have to differentiate two things:
- The traditional AI learning approach
- Test time compute (give models time to think)
The first thing might be slowing down. The second is not, and will become much more efficient in the coming months. There is also a concept called test-time learning (fine-tuning while thinking) which has even more potential.
So no, overall AI is not slowing down, and we will continue to see huge breakthroughs in the upcoming months.
As for AGI, it obviously just depends on the definition. But I am also pretty sure that almost every definition will be fulfilled in the near future.
It reminds me of when I was a kid (unfortunately, I'm that old) and people talked about the missing link (between humans and apes). Later, that search lost its significance. Part of the conceptual problem was discretizing a continuum. I think something similar is happening now with AGI.
How does a VPN solve the issue you've mentioned?
It helps her get closer to the $100 billion mark so that she can officially become an AGI
Sabine, I’m sitting here, pants around my ankles, staring at the bathroom wall like it holds the answer to this relentless cycle. Something’s off-IBS, anxiety, OCD, who knows? It’s like my brain’s hardwired to sound the alarm even when the tank’s empty. I get up, I sit down, rinse, repeat, fully aware that it’s pointless, yet here I am, stuck in this absurd ritual. If I didn’t work from home, this would be a disaster-a humiliating, career-ending parade of unnecessary bathroom breaks. Whatever it is, it’s maddening. A quiet, personal war waged in porcelain solitude.
I'm really sorry to hear that. I may just be a random stranger on the internet but I just want to say you're not alone. In fact, I have a friend who also feels an irrational urge to go to the bathroom, albeit at night when it disturbs their sleep. That being said they've gotten a lot better as their mental health has improved thanks in great part to counselling. I hope you also find some help, there is no shame in being cranky, and wish you all the best.
@chalichaligha3234 I appreciate the kind words. It's probably rooted in a quiet, constant anxiety-something that doesn’t really scream on the outside but makes itself known in strange, physical ways. I’ve come to realize, in a very intimate way, that anxiety and depression don’t always show up as feelings. They like to twist the body instead. That said, life’s pretty good right now, and this odd little truth doesn’t interfere much. And by the way, you’re a truly nice person for saying that.
Well, one thing is for sure, NordVPN will _not_ protect you from artificially generated code.
It won't protect you from anything. VPN is not a security tool.
Meanwhile, if I ask OpenAI to draw me a picture of a watch with the hands at 5:38, I will always get a watch with hands at the 10 and 2 positions. The reason is that the overwhelming number of pictures of watches out there show the hands at 10 and 2 - it's the most attractive look for a watch. That's an indication of overtraining. AI is very dependent on data - all you data scientists out there are probably already feeling it. Garbage in, garbage out. It cannot self-correct; it would take AGI to self-correct, and it cannot do that. AGI will also create its own data for training purposes. The current level of AI is nowhere near that. It's a *transformative* model: it transforms existing data. It cannot create new data.
It hugely depends on compute. I code with it sometimes. At weekends it’s creative, seems to understand much better what I’m asking for and makes few silly errors. At peak times it’s like a drunk junior developer.
This is why we need millions of humanoid robots walking around, experimenting and exploring like children to gather real-world data.
GPT is an LLM. It doesn't draw. It just prompts a different AI to draw (DALL-E). They are entirely different AIs. You could prompt DALL-E directly and get better results, probably.
Underrated comment
If you tell OpenAI o1 model to draw a picture of a watch with the hands at 5:38, it draws a watch with the hands at 5:38. You are using the wrong model. Try it yourself. It will also write python code or Javascript code or SVG or whatever you want to draw the watch with code.
And that isn't even o3. It's like saying mixed mode LLMs can't do math because a couple months ago they could only do graduate level math at 2%. That bumped up to 25% by the end of the month. And that's graduate level, not basic. The good ones are perfect at basic math. Every time you say "they can't.." you will find out either they can and you don't know it or they can in a few months. The new models self correct. AI architectures aren't done, they are always getting better.
In a world filled with "fast food" information, this YouTube channel is such an invaluable source of knowledge... thank you Sabine!
LLMs will not get to AGI because they do not have 'on the fly' memory or learning capabilities. All their learning occurs in the training phase.
That's totally true, but it's crazy how we can still simulate on the fly learning with large context windows. When the AI can "remember" so far back in the conversation, in practice it acts as if it has learned.
There are a couple of articles about "in-context learning", you might want to look at them
@@MrTomyCJ I use LLMs all day for programming; even with a huge context window it starts to go off track very quickly and you need to start fresh with it. Custom GPTs and Claude's version do the same. You can have a 2-million-token context window, but you won't get anywhere near using all of it, as the model will be so off subject/task by that time it will just be spitting out nonsense. They test them by feeding a whole book to the LLM in the prompt and asking it a question about what's on page 400 or something, which is fine, but when it needs to work things out and stay on task, or keep a certain format and remember to stay in that format, it soon starts to go wrong. Try a large document in Google's NotebookLM; the podcast audio can start to say things you never supplied it, it's quite comical.
@RobertsMrtn In-context learning, attention, LSTMs and so on... are what you'd call short-term memory.
That's only true because the models are too large. Eventually, with enough pruning and fine-tuning, you can add training. You don't need to retrain every layer: certain layers would be fixed, while others can be updated via backpropagation in real time.
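As a rough illustration of the in-context learning mentioned above: nothing in the weights changes, the "learning" lives entirely in the prompt. In this minimal sketch, call_llm is a hypothetical placeholder for whatever chat-completion API you happen to use.

# Minimal few-shot / in-context learning sketch (call_llm is a hypothetical stand-in).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

examples = [("sea", "mer"), ("dog", "chien"), ("house", "maison")]

def few_shot_prompt(word: str) -> str:
    demos = "\n".join(f"English: {en} -> French: {fr}" for en, fr in examples)
    return f"{demos}\nEnglish: {word} -> French:"

# The model's weights are untouched, yet the demonstrations in the context window
# make it behave as if it had just "learned" the translation task:
# answer = call_llm(few_shot_prompt("tree"))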
I don't agree that humans are able to solve ARC problems despite not seeing similar problems. The problems may look new but in reality they will have similarities to many other problems we've seen before.
Yeah, they are coping so hard.
Pattern recognition seems like the basis of human intelligence. This has always seemed like an obvious truth to me.
When you copy your classmate's homework, but accidentally copy their name too: 2:22
AI: ANI=limited, AGI=GANI=broader, GAI=human-like, SAI=beyond-human
AGI used to mean "like human", but the meaning got twisted so that it can be claimed, so now
GAI (expected to become SAI) is what used to be AGI.
A = Artificial
N = Narrow (limited to specific task)
G = General/Generalized (AGI now means not so limited but still unable to generalize beyond its less-limited scope)
S = Super (exceeding human)
I = Intelligence
How do "expert systems" fall into these categories. Most industry uses some versions of these.
Ooooh, how about "Artificial Narrow Super Intelligence, Generalized"? That'd be "ANSIG"!
...
I didn't notice we weren't making abbreviations here. Aaagh.
@@brunonikodemski2420 I would not even call it AI (but programming... and yes, I am a programmer, firmware/embedded)
"Expert systems became some of the first truly successful forms of artificial intelligence (AI) software."
AI for me means "some form of machine-learning", not programming in Lisp/Prolog/Haskell.
The way Sam Altman manipulates everything and everyone is amazing. He is really a smart guy.
Nah he isn't, ppl are just dumb af and believe anything
Why does he remind me of Theranos?
Yeah, especially all those bozos at openai who unanimously were going to quit if he left. Good thing we are way smarter than those dummies.
He is part of the same Tribe as Sabine Hossenfelder
@@jimj2683 at least Sabina is funny.
I think this video lacks an explanation of the new scaling paradigm OpenAI achieved with the o-series models. They can now scale test-time compute on top of pre-training compute. If anything, the immense jump in capabilities o3 shows compared to o1 shows there is no scaling wall yet. They can then use a huge amount of inference compute on o3 to create massive amounts of reasoning synth data to train a better base model in o4, and the loop starts over. I think you should have posted some of the quotes from o-series research scientists like Noam Brown; they really seem to think this rate of progress will continue at a much faster pace than traditional pre-training scaling.
If anything it kinda looks like we will reach ASI in complex tasks like math or coding before reaching an AGI that satisfies every possible thing a human can learn.
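For what it's worth, here is a rough sketch of the generate-filter-retrain loop described above. Every function and attribute in it is a hypothetical placeholder; this is the general STaR-style recipe as commonly described in the literature, not OpenAI's actual (undisclosed) pipeline.

# Sketch: sample many reasoning traces, keep only the verified ones, train the next model on them.
def sample_reasoning(model, problem, n=16):
    return [model.solve(problem) for _ in range(n)]       # n attempts with long "thinking"

def bootstrap_round(model, problems, train_next_model):
    kept_traces = []
    for problem in problems:
        for trace in sample_reasoning(model, problem):
            if problem.check_answer(trace.final_answer):  # keep only traces whose answer verifies
                kept_traces.append((problem, trace))
    return train_next_model(kept_traces)                  # fine-tune the next base model on them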
Not only lacks that but offers a lie instead.
In real-world applications, o3 is a step back. For real-world programming it is absolutely worthless, even worse than ChatGPT-4o, which is not that great either. Standardized tests that the o3 model has been trained against are irrelevant in real-life use cases. But the cost of these at-best-diminishing returns is increasing exponentially.
I hope that if AGI is ever reached, it will be too expensive to replace humans
AI is already an overused term that doesn't mean what it is supposed to mean. Therefore, AGI should mean what AI is supposed to mean. Namely, a machine intelligence that has sentience akin to humans.
My wife's brother was doing a quiz today and couldn't figure out the name of a musician with the letters shuffled. I tried Gemini and Copilot and they were hopeless: names that didn't contain the same number of letters, hallucinated letters and names. My wife sent the same letters to a friend. He replied saying someone had taken the P, and gave a name which otherwise matched perfectly. My wife's brother had indeed missed out the P.
I think this an illustration of what general intelligence means. Maybe solving cryptic crossword puzzles is a better guideline than maths tests.
PS: Copilot's response was weird; it said maybe this, then oh no it could be this, but then, etc., before confidently choosing Stevie Ray Vaughan (the letters were tddrydrnegaess).
This is because LLMs don't know the actual letters in a word, every word is converted to a number for them. It's the same reason they don't know how many Rs are in strawberry.
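You can see the tokenization point for yourself with a tiny script. This assumes the tiktoken package (an OpenAI-style tokenizer library); the exact split depends on the tokenizer, so don't take any specific output as gospel.

# Show that the model sees token IDs (sub-word chunks), not individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
print(ids)                                   # a short list of integers, not letters
print([enc.decode([i]) for i in ids])        # typically a few sub-word chunks, not s-t-r-a-w...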
And he did it for free, not for 3000 USD.
Narrow AI is fantastic
Matrix multiplication, statistics and predictions is not going to give us AGI
Evolution is such a dumb process it could never produce something intelligent.
Right. The only thing that can give us AGI is artificial evolution.
Tell that to someone in 2018 and they'll tell you that o3 is obviously AGI. The goalposts keep moving. Calling a general-purpose reasoner that can outperform most humans at most short-to-medium-horizon cognitive tasks "narrow" is head-in-sand behavior.
@41-Haiku it is narrow because it's not real. No reasoning is being done. You're buying the LLM illusion, and you need to take a step back.
@@41-Haiku Until it's as general as a human mind it's not AGI. In order to be AGI it has to do every cognitive task at least as well as a human, no exceptions; as long as there are cognitive tasks humans can do and it just can't, it's not AGI.
One thing is to mimic, another is to understand.
Right... which is why o3 understands: it has literally beaten human reasoning, which indicates understanding of what is being asked.
That's what inference time compute is for.
The model mimics, the inference understands.
Generality is just repeatedly asking the right questions.
What is to understand? How do you measure it?
@@Gafferman does it? How does it reconcile the physical world with its data?
Congrats, you're grasping the surface level of the discussion. Next you have to realize that some things can't be measured so what matters is the outcome, or what we can observe.
Companies like OpenAI have long moved on from "scaling data" to "scaling test-time compute", which are very different things. At this point it's pretty much obvious to most people that data scaling is very close to its ceiling, if it hasn't already hit it.
@1:24 one can give equally valid reasoning for both the blue and the purple:
blue: echo the most frequent pattern
purple: echo the pattern closest to the lower-right corner; or echo the pattern containing the square 5 squares to the left and 3 up, starting from the lower-right corner
.....
many (infinite?) rules might be constructed from the 3 'source' patterns, each leading to a different 4th
THANK YOU i thought i was stupid
@@julesboth1103 okay what's the additional assumption and what's the easiest explanation?
I thought I had been replaced by an AI for a moment there
@@Tnowion what do you mean by "easiest"? That is the question.
whoever has the most goes down
I've never thought that any *single* technology will be involved in creating AGI; it will take more than one working in concert. LLMs will probably play a part in it, though.
Thank you for the video. I'm very glad that I found your channel!
8:24 The first thing that came to my mind was The Zero Theorem by Terry Gilliam
I use LLMs locally, extensively. I follow the latest updates and hire servers to create service language modules for clients.
I can tell you that past a certain point the "heat" and nonsense is overwhelming.
In the last 3 years they have clarified the initial phases, improved the user experience (better UI/UX etc.), and brought the cost of use and/or ownership down somewhat.
But the derailing, entropy, and mania that occur past a certain point are as present today as they were in early 2022.
In fact, the increases in ease of use and speed mean that, in "time spent on your AI" terms, you are likely to hit that point even more quickly now than you did 3 years ago.
You're part of the problem
Sorry you feel like that, I do understand to a degree.
Within its limitations it's very good. My issue is people making claims well above its limitations. (to make money, to inflate share prices)
I'm almost 60 and need to keep skilled up to stay relevant for the next 12+ years until I can afford to retire. If that causes you a problem, I'm sorry about that.
But AI isn't "coming".
It's here.
My take was "Get on board, or get left behind".
The fact that we are miles away from a general AI is a side-bar to the impact iterative generalised AI is already having. No-one is going to stop now. That would be like the luddites throwing their shoes into the loom 200 years ago.
@@PaulRoneClarke My problem is how its goal is to steal jobs by stealing people's previous output, while absolutely demolishing copyright law into smithereens, all while wasting insane amounts of energy on a planet with an already collapsing environment.
Imagine having a robot at home that randomly finds you and runs an ad.
4:08 No need to wait till the end of 2025; we have had systems that can do certain cognitive tasks better than humans ever since the first calculators were invented.
07:25 only one guy laughed at his joke. That's sad 😢
Hi, I'm working on Open-source AGI and I must say we're heading into the year of trust. Would you trust someone with AGI resources?
My own definition: AGI is when it can fully utilize any computer and software, and use this to develop its own software in order to grow.
Also, the current structure of LLMs is hallucination limited by data. LLMs will be replaced by a newer architecture that will have a conscious hallucination driver, so it can consciously stop a hallucination.
8:04 Yup, and it will most likely be another winter for AI that will last at least a decade :|. I just HATE this stupid hype that focuses on one certain thing and gives a false feeling of a completed journey. It slows down progress, as people start to wait for something impossible from overhyped technologies and throw them away after seeing their obviously stupid dreams crash against reality.
There is hype, but most of it comes from small start-ups that have little to offer, that are just trying to cash in. So, "AI hype" is really only a useful term for Wall Street. The leading AI companies are actually pretty realistic in their comments. As for another AI winter, previous AI winters were largely caused by attention and funding turning away from the challenges. There were very few people working on AI and little money being spent. Given the huge increase in the number of people working on AI and the amount of money being spent, there is zero chance of another AI winter. It's just not how the world works.
@@netscrooge Do you think there will always be billions of dollars spent on AI? And as for 'realistic' predictions from companies, I've heard from Sam a lot that he will make AGI and that it's coming in 2025.
"AI winter that will last a decade"...?
You people are delusional.
Where is the winter? Scores of articles proclaimed that AI was hitting a wall, and then OpenAI unveiled a reasoning model that smashed a (supposed) AGI benchmark and can solve some novel PhD-level math problems. Where is the wall? Where even is the stutter?
The approach to AGI will be like Zeno's Paradox: the "closer" Silicon Valley gets the farther away the destination will become as the energy requirements for closing the remaining 50% gap go increasingly hyperbolic. The human brain has 86 billion neurons and 100 trillion synaptic connections and can produce insights with the energy of a peanut butter sandwich. AGI is so primitive in comparison it will probably take 10,000 nuclear reactors to do something comparative.
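For a rough sense of scale on the peanut-butter-sandwich point, here is the back-of-envelope arithmetic. The numbers are assumptions on my part, just the usual ballpark figures: roughly 20 W for a human brain and roughly 400 kcal for a sandwich.

# Back-of-envelope: how long a sandwich's worth of energy runs a ~20 W brain.
BRAIN_WATTS = 20          # assumed ballpark figure
SANDWICH_KCAL = 400       # assumed ballpark figure

joules = SANDWICH_KCAL * 4184              # 1 kcal = 4184 J
hours = joules / BRAIN_WATTS / 3600
print(f"~{hours:.0f} hours of brain-time per sandwich")   # on the order of a full day

# For comparison, a single modern GPU draws several hundred watts,
# and large-model inference clusters draw far more than that.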
Similar to creating artificial life. Every time we build something that was once considered artificial life, the goal gets redefined.
The energy costs are the really interesting part of the problem I agree. I can see them getting 'smarter' in most metric regards than humans - hell, I think in many respects they're already smarter than a lot of people I've talked with in certain regards. But the energy efficiency of a human brain is indeed many orders of magnitude better. The energy requirements for the current generation of AI are completely out to lunch, and as that reality sinks in with the investment community the shine may come off the entire affair with shocking rapidity.
Still, I imagine that governments are going to want them for purposes of military planning and social propaganda control, and they'll be willing to pay a great deal for that.
Would be hilarious if this crashes everyone's economy. Trying to create a human, then old man Warren Buffett and the ghost of Munger buy up everything cheap
❤
How many of those 86 billion neurons are actually used in the process of generating so-called "general intelligence"? Isn't it possible that, like junk DNA, they are merely cumulative products of the evolutionary process? Perhaps it is the way specific connections process stimuli that creates AGI, and not the sheer bulk of connections.
The official definition of AGI is "an algorithm smart enough to convince Sabine Hossenfelder that it is an AGI." With the current exponential growth in capabilities, we are on track to reach this goal in approximately 300 years
It's simple. We will know AGI is reached if/when it's literally impossible to create new benchmarks where the average (possibly skilled, like a researcher or professional) human isn't able to beat AI by a significant margin anymore. Right now the ARC test is being updated and ChatGPT is projected to have a score of 30%, so we should still be good for some time. Plus the fact there are still many fields where AI is still lacking compared to professionals, like music or 3D modeling. Even 2D images are still not quite there yet, just look at all the AI slop channels that have been popping up recently.
"Literally impossible" is maybe a high bar; it's already pretty hard. And by "some time" do you mean 6-12 months? On the exponential curve we've been following, a year from 30% isn't 45% -- it's more like 70%+.
@@41-Haiku You're both saying it's a high bar yet you're also saying we're on an exponential curve. So what's your point?
1:50 wtf how does a program cost $3000 per task? electricity usage?
I was also curious about this. That price gap is bigger than the one between gold and silver.
The new o-series of models unlocked what is called "test time compute", a new scaling paradigm on top of traditional pre-training scaling she mentions in the video.
They can spend more time "thinking" (generating more answers; more computation time, so more energy used, yes) to get better answers. They can then use all those reasoning tokens generated to train the next big base model; at least that is the hope, and it is possibly how they trained o3.
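A minimal sketch of the simplest form of test-time compute, best-of-n sampling with a scoring step (illustrative only; `generate` and `score` are hypothetical stand-ins for a model call and a verifier, and the actual o-series recipe isn't public):
```python
# Minimal illustration of test-time compute: sample several candidate answers and keep
# the best-scoring one. More samples = more compute at answer time = (often) better answers.
import random

def best_of_n(prompt, generate, score, n=8):
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy usage with stand-ins for a model and a verifier:
result = best_of_n(
    "What is 2 + 2?",
    generate=lambda p: random.choice(["4", "5", "four"]),
    score=lambda a: 1.0 if a.strip() == "4" else 0.0,
)
print(result)
```
The high-scoring candidates (or whole reasoning traces) are what could then be recycled as training data for the next base model.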
This is a mistake in the price. That is the cost of the whole experiment; the real cost of a single task is ca. $20.
Thank you! Great exposure! Kind regards!
We keep moving the goalpost. First it was the Turing test, then it was average human intelligence, and now that the LLMs are starting to outperform elite humans, we're probably going to move the goal again. I don't mind, but it is amusing.
💯 humans coping with gradually being outmatched in every field by AI. These things are held to a far higher standard than the average human. Every weakness it has is highlighted while ignoring how it's landsliding humans in most other metrics. Human ego and delusions have no limits.
@@DeathDealerX07 Which problem has AI solved that was not solved by humans before? Right, none, so it is a better search algorithm over a database of issues that were already solved by humans. If your definition of intelligence is just copying what has been done before, then yes, AI is probably underestimated; if you expect it to work through issues with its own reasoning and thoughts, then AGI is still very far away.
@@Techmagus76 That's a fair point, but it also describes perfectly what they're talking about: the shifting goalpost. Every time it advances, people move the post to diminish the current result. You aren't denying that it's very rapidly outperforming humans in most "mental" tasks. So instead you shift the goal to something along the lines of "well, it hasn't created anything new." It's beaten every goalpost placed before it so far; the idea that it's going to stop just because is wild. No one knows if it will or won't, but being adamant that it's incapable is ridiculous.
here come the UBI hobos...
What's going on guys , afraid your dream of free money gets destroyed by logic and facts ? lol
@@Techmagus76 What problems have you solved that another human hasn't solved? I'm not trying to be snarky but most economically valuable work doesn't require novel problem solving or creativity, just being able to follow clearly defined business processes is enough for most white collar jobs. Most of us aren't going to be replaced by an AGI, we're going to be replaced by a dumb, inert machine in some distant data centre
I started to learn to drive the first time I got in a car as a child. Observational skills. Same for the dishwasher. How many times did our fathers tell our mothers, "not like that"? Humans are learning all the time, we follow patterns, and are predictable as a result. The argument that a 17-year-old learns to drive in 20 hours is useless, as is the 10-year-old who knows how to stack a dishwasher straight away.
A number of teenagers were shown a rotary dial telephone and couldn't work out what it was... yet a chimpanzee did. Do we simply say that all teenagers are stupid and chimpanzees are more intellectual than humans? Of course not. This is why defining AGI is so difficult.
Defining intelligence, when people such as LeCun make huge assumptions and assertions that are wholly unfounded solely to create a stage for themselves, is hard. The same people tell us hydrogen is still the future, not EVs, despite EVs now being cheaper to make than ICE cars and hydrogen going up in price repeatedly. I wouldn't want to be standing in LeCun's camp, or in the hydrogen or ICE car camp.
Many of these people have been saying for years that intelligence is deeper than just learning a bunch of stuff in a big neural net, and they can't bring themselves to admit they were wrong about LLMs no matter how good they get.
Great comment. I can't believe that an expert used such bad examples, and then another scientifically minded person thought they should be included in this video.
My mother had to tell that to men
I'm doing an MEng in Signal Processing and ML as well as a BS in Mathematics, and I've looked at some research on joint work between neuroscience and ML. Here's what I think.
A thing here is that current AIs are built upon statistical algorithms, designed by humans according to some mathematical theories invented by humans. The statistical algorithms supposedly represent the ability to "abstract information" and to apply those abstractions to classify 'new' information.
How current AI roughly works is that we instantiate the statistical algorithm with some chosen default parameters, and then we train the model so that it adjusts the parameters until "data sufficiently close to an expected input" yields "data sufficiently close to an expected output"; the algorithm, along with the parameters, would somewhat represent some category of "abstractions" (a minimal sketch of this kind of fitting loop follows after this comment).
To clarify the abstraction: if I feed an image or video of a rainbow-colored dog that's upside-down to the AI, I'd expect the AI to neglect the color, extract the shape of the object, and predict that the object is a "dog" or something "close to a dog", unless some perturbation, such as additional information in the picture or video, shows it's unlikely to be a dog. In the case of a video, I'd also expect it to assume that the dog/object in different image frames is the same one regardless of its physical deformation, light variation... as long as the frames are "sufficiently close to each other with respect to time", the deformation is continuous (yes, topology), and so on. This can be a philosophical debate, but naturally, I think that's also what most people would assume.
So, we can clearly see the limitations, as AIs are definitely bounded by algorithms given by humans, while human minds are unlikely to be bounded by those statistical algorithms but are very likely bounded by some of Mother Nature's algorithms (provided they exist). Unless we can show that those algorithms exist and that there's a set of man-made algorithms that accurately models them, we can't faithfully say AI really captures human intelligence. Neuroscience hasn't made much of a breakthrough on this problem.
In addition, another major ability that I believe AI lacks is the ability to "change their minds", i.e. changing their code/algorithms without breaking them. Although it seems possible to mimic this by having two or three neural networks changing each other's code as well as other networks' code, starting from some initial conditions, there's a possibility of one mis-changing another, which might kill the whole network. The technical details of those networks are quite complicated.
I'm fairly optimistic about the research in neuroscience and AI and I enjoy the AGI hype, as there are many things to uncover, but I think that we might be at least 2 decades away from actual AGI.
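As promised above: a minimal example of "adjust the parameters until inputs close to the expected input give outputs close to the expected output", here plain gradient descent on a tiny linear model (purely illustrative; real models have billions of parameters, but the loop has the same shape):
```python
# Fit y = w*x + b to a few points by gradient descent on the mean squared error.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # points on y = 2x + 1
w, b, lr = 0.0, 0.0, 0.1                      # default parameters and learning rate

for step in range(1000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y                 # how far the prediction is from the target
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                          # nudge the parameters to reduce the error
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")            # converges near w=2, b=1
```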
It's been a while since we've had an update on the dangers of Artificial Sweetener, AI's beguiling accomplice
Please, we must have a video to keep the public informed
@@sumfatdude4824 Wrong, it is a portmanteau of arty-fish-oil
The irony of DeepSeek is so wonderful.
ChatGPT was trained on basically any material (copyrighted or not) without compensating the authors. The question of plagiarism has always orbited OpenAI and all the AI services.
Now DeepSeek comes along. From what I read, it's better than OpenAI's and uses far cheaper hardware. The total cost for DeepSeek was less than $10M. A venture capitalist once said it was "a joke of a budget".
And how did DeepSeek achieve such amazing results on a peanut budget? It trained itself on ChatGPT output.
Double irony: here's a Chinese company in a non-democratic country sticking it to "the man", the techno-oligarchy of democratic America.
They didn't even invent the method to steal it, they stole that too lmao.
No Patrick, learning from text that people voluntarily publish to the public for everyone to read is not stealing.
Based china
it's not better, if you actually try it out
Language models are one part of AGI, but an important one. Others are strategic and logical thinking, motivation, ability to sense and understand surroundings and affect it through actions. However, even with current LLMs amazing applications can be implemented.
Many of these things can be handled through language: the ability to sense and understand surroundings and affect them through actions.
Feed the AI an image of its surroundings (current LLMs can already process images) and let it perform actions by issuing commands, then just run it in a loop, feeding it the results of the actions, and so on.
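A bare-bones sketch of that loop; `model`, `take_photo`, and `execute` are hypothetical stand-ins, not real APIs:
```python
# Perception-action loop: show the model an image of its surroundings, let it choose an
# action, execute the action, feed the result back in, and repeat.
def agent_loop(model, take_photo, execute, goal, max_steps=10):
    observation = "start"
    for _ in range(max_steps):
        image = take_photo()                              # sense the surroundings
        action = model(goal=goal, image=image, last_result=observation)
        if action == "done":
            break
        observation = execute(action)                     # act, then observe the result
    return observation
```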
When it comes to LLMs, you’re taking patterns encoded with values and statistically sorting them into a synthetic pattern driven by the input language. It’s more or less a straightforward concept.
But then there’s reality: it has tons of patterns and values all over the place. And I don’t know about y’all, but AGI to me has always represented something far more capable of working in such an environment, not just something that spews fractal fountains of inferred gibberish. Ahem.
Exactly, we overestimate how much information human language carries, and overestimate even more how much of it is written. Human language is too low-resolution to describe the complexity of the universe. It is a tool we created to fix our practical problems, not to draw an accurate picture of the universe. It is not just an engineering problem but also a data problem: they have absorbed all the information that is digitally accessible, and that's it.
@@utkua People discuss the dangers of synthetic data, but what is it these models become once they’re baked on a rack? Language itself is reductive, expressive of phenomena intrinsically owned by the self.
@@lobabobloblaw Simulations are also reductions. You cannot even find an accurate physics engine that runs in real time. Most of them are just for games and only need to be reasonable enough.
Once the LLMs gain access to the sum total of human knowledge, they won’t be able to go any farther until humans discover more to feed it. That’s not AGI.
They have already sucked the entire Internet dry for training GPT 3.5 and GPT4o. The great hope is that somehow, magically, the output of O3 can be reused as synthetic input for generating more data. How that is supposed to work and produce superior outputs from an information entropy point of view is a mystery to me.
@@dominikvonlavante6113 We do it as humans all the time. It's mostly a process of generating random noise and occasionally discovering something useful in the giant pile of informational detritus. It's the ability to recognize when something might be 'useful' that's the most important in this regard.
@@dominikvonlavante6113 It's not magical, and they've been doing that for over a year now. You can create a "textbook" of high-quality synthetic data from one model and train another model on it, and the model will get better, not worse. The key is filtering the synthetic data well, and using the additional examples to reinforce that type of problem solving.
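A rough sketch of that filtering idea (hypothetical helper names; the point is only that generated solutions are checked before being reused as training data):
```python
# Generate candidate solutions with one model, keep only the ones that pass a check,
# and reuse the survivors as training examples for the next model.
# `teacher_model` and `passes_check` are hypothetical stand-ins.
def build_synthetic_textbook(problems, teacher_model, passes_check, samples_per_problem=4):
    textbook = []
    for problem in problems:
        for _ in range(samples_per_problem):
            solution = teacher_model(problem)
            if passes_check(problem, solution):    # e.g. unit tests, a verifier, or majority vote
                textbook.append((problem, solution))
    return textbook                                # used to fine-tune the next model
```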
@@dominikvonlavante6113 Eating your own shit does not make you stronger.
"They have already sucked the entire Internet dry for training GPT 3.5 and GPT4o. The great hope is that somehow, magically, the output of O3 can be reused as synthetic input for generating more data."
The plan is probably to teach AI to navigate and problem-solve by simulating it embodied in virtual environments,
and then it can be its own source of "data" once it can effectively traverse real environments efficiently from simulated experience.
That, and test-time compute can help,
as well as turning benchmark problems into virtual practical problems that the virtual agent can iterate on until it solves them.
Imo we already have all the necessary ingredients to solve AGI; they just have not been combined and put in the oven yet.
Sabine, thank you for doing these videos. We want more of these videos where you bring us objective truth. More of these topics, please!!!!
AI might be able to do what a human can do a lot faster or even better, but AI cannot do what humans can do in terms of us being creatures that stabilize the instability inside, hence unpredictable but intelligent beings. If some day the AI, at whatever level it is, starts asking the question "why?" without being programmed to do so, that day I will take it seriously.
However, I hope it doesn't start with the question: "WHY is THIS species on this planet?"
After pouring billions into the AGI hype, they finally discover that the cheapest form of intelligence available is... human intelligence.
For plenty of cases that is already not the case.
Human intelligence is actually a lot more expensive, unless it's o3 on high. Human intelligence needs a wage (a large one if it's especially intelligent and knowledgeable) and needs food, water, housing, entertainment, etc. Add it all together and you get quite the sum.
@stagnant-name5851 What's the point of money anyway when all work is done by machines? Nobody has to pay them, except in energy, which by then is probably fusion energy anyway, maintained by, you guessed it, machines. Stuff would just be available to anyone anywhere. Every human could get the best of everything available, because there would be no point in making a less good but cheaper product, since nobody would have to afford it, since nobody has to get paid.
The transition is gonna be long and quite painful. You keep learning new jobs that will then be replaced by another AI and another robot. Then, with fewer and fewer jobs, you'd struggle to even find some sort of job.
@stagnant-name5851 First of all, companies that sell AI services, like OpenAI, are still far from being profitable. This in itself is a critical hurdle that must be overcome in order to secure the future of AI. But beyond profitability, there are deeper technical and societal aspects to consider.
The intelligence of an AI system largely depends on the number of users interacting with it. As more users engage with the system, its overall performance can decline. This is why there's a continuous effort to increase AI's processing power. For AI to operate at the level of an average human while serving millions of users, the energy demands would surpass our current technological capabilities, at least for the next 60 years. In essence, AI is a tool, much like a computer, that helps humans boost productivity. As these tools become more advanced, they pave the way for new jobs and make tasks that were once expensive or difficult much more accessible and affordable.
One potential outcome could be a scenario, either utopian or dystopian, where AI is capable of performing everything a human can, making human labor obsolete. In such a world, people might no longer need to work and could focus on pursuing their passions, free from financial constraints. Money could lose its relevance entirely. With AI handling everything from physical tasks to intellectual work, society might shift toward creativity, leisure, and personal fulfillment. However, this vision also brings new challenges and questions about human purpose and the structure of society.
The most significant obstacle to creation, however, is energy. This is a fundamental law that governs all of nature. Nature has achieved remarkable efficiency, evolving systems that operate with minimal energy waste. It takes us decades to even come close to this level of efficiency with our current technology. Whether we’re trying to replicate biological precision or achieve higher energy output with fewer resources, we still have a long way to go in closing the gap between human technology and the inherent efficiency of nature’s design.
In conclusion, despite all the advancements and hype surrounding artificial intelligence, the reality may be that the cheapest and most efficient form of intelligence remains human intelligence. While AI has incredible potential and can revolutionize productivity, the energy costs, technological limitations, and complexity involved in replicating human-level cognition are immense. Nature, through evolution, has optimized human intelligence over millions of years to operate efficiently with minimal energy consumption. Until we can significantly bridge the gap between human ingenuity and technological efficiency, human intelligence will likely remain the most accessible and cost-effective form of intelligence.
Maybe AGI can solve problems in quantum theory and intuitionistic mathematics, or the renewable-energy-with-green-H2 problem
Sabine, I trust you more than 90% of the "science" channels on YouTube. You have scientific skepticism and use titles that aren't just clickbait. The rest are like "Omni" magazine, a "science" magazine that was sensationalistic with very few facts. Keep doing what you are doing! Science isn't about "knowing", it's about asking the right questions and finding out where they lead. Sam Altman is an under-40 guy that talks with "voice fry". NOT reliable. These people never admit that our brains don't function with mathematical algorithms! Has a computer ever independently developed Newton's laws of motion?
I think you mean vocal fry, and I guess you don't trust anyone in Finland since they all use it. Your thinking killed Galileo but ultimately couldn't hide the truth forever. We are not special in the universe.
@@PeteQuad What ARE you talking about? Never mind. I'll wait for AI to explain it. May Lord Von Daniken grant you peace.
One vote for no AGI in 2025, but usable LLM and ML stuff, specifically related to key verticals. Example: data science. The current LLMs already have people throwing data sets at them, and the market for data science work on Upwork has dried up. So too graphic design: customers are already generating their own graphic design work and the field is suffering. Translation is another field that will be affected, as LLMs are great translators of documents, even handwritten documents that you only have a blurry picture of. So: usability yes, but AGI, no.
Exactly
I've been saying for years that AI will basically be able to take over every task that is accomplished by a human in front of a screen.
If we cut the distance to AGI in half with each next model, how long will it take us to get there?
It's finite if the time between the release of each model is half of the previous one
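Spelled out (just the arithmetic behind that reply, not anything from the video): if the first halving of the distance takes time T and each later model arrives in half the time of the previous one, the total time is a convergent geometric series,
$$T + \frac{T}{2} + \frac{T}{4} + \cdots = T\sum_{k=0}^{\infty}\left(\frac{1}{2}\right)^{k} = 2T,$$
so the gap closes after a finite 2T. If instead each halving took a constant amount of time, the sum would grow without bound and you would get the Zeno-style "forever halfway there" picture from the earlier comment.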
NSS (Natural Specific Stupidity) will beat AGI in all possible senses. I can tell you this looking at my neighbors.
Yeah, just like in chess. A grandmaster can't win against a noob, 'cause they'll go all crazy on 'em! Being smart isn't useful! \s
One person who gives a very good explanation of what an AGI is (one definition, anyway) is David Deutsch. He would probably say something like: a chess program, however good it is at playing chess, even if it can learn from its mistakes, is not an AGI. It lacks creativity and can hence never create new knowledge, even about chess. It cannot step outside the algorithms it follows. It doesn't know it plays chess, and it cannot choose not to.
An AGI however would have creativity and could say something like I don't want to play chess, I want to do... something else. Creativity is, I think, a main parameter in an AGI. That doesn't mean an AI isn't vastly better than any human in many areas.
If we could write the program we have in our brains into a computer system, it would be an AGI.
If we ever create an AGI (far into the future) it will probably be different from the human program, but it will have creativity! Otherwise it just follows instructions.
A common misconception is that if an AI gets good enough it will eventually turn into an AGI. That is wrong. It's a completely different program.
An AGI is the opposite of following instructions. It's a different kind of program that we as yet have no idea how to make. But it will have creativity!
With all respect to other definitions.
None of the LLMs out there can write a simple program to make OpenGL 3D rotating cube. But DeepSeek can after 10 prompts to fix its mistakes. 😀
The reason I think AGI is already here is how smoothly you transitioned from how AGI is just going to spam us with ads to an ad for Nord VPN
Chinos pirating stuff again and claiming it was their invention lmao
you'll still buy the products
its ingrained in their DNA (i don't mean it literally just in case)
AI models are already smarter than most people in most aspects. This should be obvious to anyone who's used an LLM in the last year. The question is: when do they become smart enough to automate AI research? When that happens, well, get ready for everything to change. Don't assume we'll survive that though. I'd prefer a more cautious approach. Let's not build things that can outsmart humans.
Imagine that we were limited to learning from the essays of 8-year-olds. At some point human data will be insufficient, and self-directed thought and exploration are the answers. This is what the field generally labels 'search', and the techniques often involve RL, like AlphaGo did.
I think the obvious answer to when computers achieve AGI is when they start generating new ideas and insights that no human mind has thought of before, in every field, on its own, without being prompted. When it goes from a regurgitator of data to a free, creative thinker. That's the standard for humans; it should also be the standard for machines.
That's not what AGI means. You are a non-artificial AGI even if you can't come up with important insights. AGI doesn't mean "superhuman capabilities". "Without being prompted" is just a whim imo. A system that can only reply could totally be considered AGI if it replies well enough.
LLMs don't regurgitate data, they don't work like that. It's surprising the amount of people that still continue to misunderstand how these systems work.
@@MrTomyCJ My standard of AGI is different than yours, i'm not wrong, we just disagree. The smartest people in the world cannot agree on this definition, there is no consensus, so though you may like to, you can't speak from authority. For me, the obvious standard is something that's a long, long way off. For you, the definition is something that is achievable possibly in this decade, and may have even happened already. I never said "superhuman", I said it should have, at least, the same standard as human thought - spontaneous creativity and insight, without being prompted.
@@sabloff Yeah, there are almost as many definitions of AGI as there are people who use the term. One very important objective threshold is "Seed AI." That's the point at which a system (whether it's "true AGI" or not) is able to autonomously do AI research and build an AI that's even better at AI research than it is (or improve itself directly). Once that threshold is reached, recursive self-improvement occurs. Whether it would be a "hard takeoff" / "intelligence explosion" is up for debate. It might get slowed down at first by the limitations of experimentation and (hopefully) human oversight. But if it's allowed to continue (or if it escapes oversight in order to continue), a superintelligent agent will be created, and the world will belong to it--not to us.
I have an Adjusted Gross Income every year. I'm glad you said what the acronym stood for at the beginning.
Fuzzy Logic > Neural Nets > Machine Learning > Deep Learning > AI > AGI. Yep, there is a technical evolution across this alphabet soup. However, following the money also offers another explanation.
AI doesn't mean "very smart", AI is any system that simulates intelligence to some (almost any) degree. Even videogame NPCs are considered to be a form of AI.
The deeper question is: why has all this technology not allowed us to live a life like a lord of bygone years, dedicated to the pursuit of art and knowledge, free of the anxieties of worrying about making ends meet?
In the same breath that Tech Bros talk about AI, they also talk pro-authoritarianism, cutting back the benefits of the state for the masses and about the need for the many to suffer.
Because capitalism.
Shrinking the state is "pro-"authoritarianism? Huh. I always learn something new from YouTube comments.
Shrinking the state means even less tax for Musk, so he becomes even richer. To achieve this, he is currently buying political power so your vote is worthless. You heard it on YouTube @SnapDragon128
AGI being far away sounds like good news. I have a love-hate relationship with AI. The current models are useful but unfortunately people have started abusing it and spamming it smh
Yeah, let's waste terawatts of energy on useless prompts, when humans could do that just to win an argument. Where are all the environmentalists when you need them?
they will come, when sh*t really hits the fan.
Came to add to this: yeah, let's waste terawatts of energy on a system, to end up with a good-looking answer that just doesn't work or make any sense, lol.
It's already my life using AI and some agents: it goes and does it, everything looks good, but then, oopsie, it's actually wrong, and either we have to fix it or start over.
Now put that at scale, like using it for bio-engineering or something. As a company you say "let's go!", burn god knows how much money, and end up with an answer that looks good, that no one actually understands why it's wrong, but that is wrong anyway.
Looks like a bright future! lol
I can see some arguments strengthened by this: humans might actually become even more valuable with AI/"AGI" (which is not yet AGI).
Humans + AIs is the answer. We are already very efficient; add efficient AI and nothing can beat it (cost-efficiency wise)
My definition of AGI is simple: The one that can acknowledge its uncertainty by saying "I don't know" rather than generating false responses through hallucination.
Yeah... some kind of awareness of how confident it is in its own answers. LLMs have none of that.
@@DaemonJax Here's more text... how'd I do?
So, you don't know?
That's not hard to make, actually. But it won't be 'AGI' either.
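In the spirit of "not hard to make": a toy abstention rule, assuming you can read a per-answer confidence from the model (`answer_with_confidence` is a hypothetical stand-in, and real systems would need calibrated confidences for this to actually help):
```python
# Answer only when the model's own confidence clears a threshold; otherwise say "I don't know".
def cautious_answer(question, answer_with_confidence, threshold=0.8):
    answer, confidence = answer_with_confidence(question)
    return answer if confidence >= threshold else "I don't know"

# Toy usage with fake confidences:
print(cautious_answer("Capital of France?", lambda q: ("Paris", 0.97)))     # Paris
print(cautious_answer("Will it rain in a year?", lambda q: ("Yes", 0.31)))  # I don't know
```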
Please don't undervalue delusions and deception. There are four horsemen of insanity: perception, hallucination, delusions, and deception. As I think about it, those four concepts are also the foundation of institutional valuation, and perhaps cultural valuation too. (This is an attempt at humor and it came out too dark.)
This is very down to earth and anecdotal, but I was using several text-to-image models yesterday to produce something very simple: a wide and long bamboo roller blind that is used as a photo studio backdrop, meaning it (a) fills the full image background and (b) extends onto the floor. It didn't matter which model I threw at it or how I phrased it, none of the models could let go of the associations they were trained for. What I asked for "simply didn't exist" (read: they didn't have enough data samples to create a pattern), so they all insisted on giving me what I wasn't asking for. In the end, to solve such a case, either you have to jump through hoop after hoop, building made-to-measure hacks around the problem, or you have to train a (sub)model specifically for that task. An actually intelligent model would just understand the logical concepts involved and produce the image. So no, AGI is not around the corner just yet. Today, the joy AI brings me is on par with the frustration it causes. That is NOT a good business model 🙂
As long as they can't answer the question "What do women actually want?", there is no AGI.
Humans are a form of AGI. AGI doesn't mean "superhuman intelligence".
@@MrTomyCJ No but it means GENERAL intelligence.
I'm once again of the opinion that what's missing is the rather human tendency to guess, hallucinate, and fake it.
Nah, what's missing is any understanding of the problem they want to solve. It's like self driving cars. When Sebastian Thrun et al demonstrated automating some of the mechanics of driving a car, and created a course on Udacity it was clear they were all deluded that they'd more or less solved the problem and they fancied it wouldn't be long before our roads were full of self-driving cars. At that point Google and a bunch of both tech and car manufacturers started throwing money at development. Most of them were perhaps a little more sane than Altman, barring perhaps the other delusional stock bubble idiot Musk who defecated complete and total nonsense about how soon Tesla would have full self driving.
And where are we? We're no further forward. The more we've spent on self-driving cars, the clearer it is that we didn't understand the problem and the further away from full self-driving we are. It's looking likely that you need AGI or something at that level to automate driving on roads with human drivers; that human beings bring far more to driving around than just a bit of muscle memory to wiggle the controls and the ability to analyze what they are seeing outside the windscreen.
Well, AGI and AI are basically at the stage where self-driving was: they are drunk on the Kool-Aid and deluding themselves that we're nearly at AGI, but we're not. We don't understand the problem, let alone have any clue how to solve it. Specifically, the important thing to note is that, just as riding around on a horse wasn't a step of the way towards inventing the car, current AI is not a step towards an intelligent machine. Whatever you need to do to make a machine think, have intelligence, be self-aware, etc., is an unknown; we don't even have a good grasp of how this works in people and other living species. We can barely define some of the terms. Specifically too, the idea that if you throw a few billion at a company with AI in its name it might discover it makes no sense. There's no evidence they have any idea where to start. All the evidence we have suggests that they are just riding a wave of hype created by a computer system that, at great cost, can spit out plausible-looking bullshit in response to text prompts.
That's pretty interesting, you might be onto something here. We're working on something called daydreamProtocol which will allow agents, like looped LLMs, to play all kinds of complex games.
@michael1 It's ugly and regrettable that your comment denigrates someone you don't know at all, so you can make a partially correct statement on a subject you aren't an expert in.
@@aaronjennings8385 If you saw something ugly aaron you were probably looking at a mirror. There are no "experts" in agi because it's something that doesn't exist and no one knows how to create it. Anyone claiming to be an expert in AGI would be as stupid as someone in the 1700s claiming to be an expert in internet networking - but not quite as stupid as the morons throwing money at him because they think he's an expert.
The ad model does not seem to work for LLMs - They are just too expensive to run ... Maybe that will change but currently it does not cover the costs.
Seriously, you have two sources to oppose tens of world experts, one of which claims that AGI will arrive in the next 6 years at most (so not 'never ever' like you baselessly predict, but very soon actually, just a little later than others say), and the other is known for being a buffoon talking about AI with absolutely no knowledge of the matter and a track record of wrong predictions dating back almost 10 years!
I know you can't be fully unbiased, but that much is ridiculous and unprofessional!
Honestly. LeCun hasn't taken responsibility for how much he has misled people, either through a wrong model of the world or through malice. "AGI never" hasn't been a serious position in at least a year or two, and "AGI 2027" is starting to look realistic (or already did, if you did the math and updated your expectations accordingly). More than that, most AI researchers say that there's a reasonable chance AI could drive humanity extinct this century (or maybe this decade).
Center for AI Safety (CAIS) Statement on AI Risk:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
A global priority? What? Who on earth would sign such a thing? Turns out, most of the top computer scientists in the world, and even the leaders of the top AI labs! We can't let them create an uncontrollable technology, and the org PauseAI has an actual plan to stop this from happening.
Not even close to AGI.
Prof. Włodzisław Duch, a Polish physicist, specializes in applied computer science, neuroinformatics, artificial intelligence, machine learning, cognitive science, and neurocognitive technologies. He has been researching the topic for decades. He is quite confident it is only a matter of time (or actually: of resource scaling).
1:20 I don't know how to solve this puzzle. I might be an AI.
Look carefully at the images again: the shape that appears the most times is shown in the output.
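If that reading of the puzzle is right, the rule is just a counting argument; a toy version, assuming the panels are already reduced to shape labels:
```python
# "The shape that appears the most times is the output", as a one-liner over labelled panels.
from collections import Counter

def most_common_shape(panels):
    return Counter(panels).most_common(1)[0][0]

print(most_common_shape(["square", "circle", "square", "triangle", "square"]))  # square
```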
We should focus more on AI safety. Civilization-ending catastrophes caused by a bad use of AI do not require AGI.
AGI may lead to something we cannot stop or control any more, but so can a bad use of 'regular' AI.
So the relevant questions are: how we stay in control (with future and current AI) and who exactly stays in control (democratic institutions or some totalitarian system).
We treat AGI as the most important milestone in this race, but we can create uncontrollable problems much sooner.