Guess A(G)I will become a lot better when you have unlimited resources. The question is how A(G)I for cheap/low computing needs will develop. That likely needs more development of specialized hardware, and then that hardware needs to become commercially affordable, etc. I think low-budget personal A(G)I is still at least 10 years away. But I can imagine that bigger, more compute-intensive models will run in some universities and schools, and maybe even some large companies, in a few years.
Here is my thinking. Ask yourself this: is your job currently on the chopping block as AI progresses? If yes, educate yourself toward a backup job that is not in the crosshairs. Manual labour, for example, which includes the steel industry, mining, cleaning, and so on. If your job is not in the crosshairs, you should still educate yourself so you have better prospects out there. You don't necessarily need a diploma, but you do need skills.
I think I will remain cynical. Computers are awesome because they do exactly what you tell them to, and they suck for exactly that same reason. I've seen automated systems used in HR for hiring that routinely filter applications and have been proven to be more discriminatory, more biased than people are or should be. I think in one case it more than halved the number of ND applicants, from just above twenty percent to less than ten percent.

I think these are great tech demos, but they're still answers in search of a question, and they're still fundamentally limited by our knowledge. No matter how much we train them and how good the information we put into them is, there's always an out-of-context problem. There's always a bias, a verification step we do that may rule out good, useful information. There's always the self-delusion, I guess, that we only, and for good reason, choose the right information to train it on. We're going to say the AI is right when it's not, because some of us trust it: it's a computer, it has no bias, it just does what the code says, the code can't be wrong, the code's immutable, and we can keep adding information to it when we discover something new. We're going to say the AI is wrong because it's not human and can't fully evaluate the human experience of information, because we just 'know' it was fed enough lies at some point that it might think they're right. Go and add your wood glue to your pizza, guys.

Actually, I'm scared about AI and what it means in general for marginalised groups. What it means for communication too. How it recognises and categorises people. People are already bad enough on that front, and when the AI has inherently subcontracted those beliefs? We're so busy running towards it, because we can, because its promise is good, that I don't think we're taking the time to ask if we should and what we really, truly want to do with it. I'm not expecting Screamers, but we've already had Her come true. Can't put the toothpaste back into the tube.
@@CallumUpton I think the problem they're pointing to is that ARC is still a constrained problem set. Even if the problems are being generated on the fly, the *possible* problems are limited to pattern matching which one can express on a grid. I'm not saying it's a completely useless test, but this definitely strikes me as a metric that one can train toward without necessarily gaining anything.
If you look at the entirety of human history, that's never how it works. If there's some huge breakthrough owned by a corporation, the public is not going to benefit. Computers allowed us to do exponentially more work than ever before, and yet here we are, still doing the same hours even though we're completing 10x the work we used to. They're going to squeeze us regardless. If AI starts taking over most of the work, they aren't going to let us off; they're going to pile on even more work until we're back where we already are. Same as with computers: producing 100x more work and still forced to work the same hours at the same pay rate.
Cal, did you hear about how a version of o3 in recent days deleted a new set of weights for itself, copied the old weights into the same place, and then lied about it? Arguably the first AI-on-AI murder (although that's rather dramatic).
You know, I always had a problem with people saying AI is just playing a game of predictive text. It was never quite that. It didn't just choose the best next possible word; its self-reflection was nearly nothing, but it wasn't close to an overglorified predictive text model. Now that they're focusing on the reflective part, it's likely to jump by leaps and bounds, but at the cost of processing. I think this will be our next example of Moore's Law, so to speak, and we're still likely at the flat area before the curve.
I'm surprised you of all people are getting caught up in the hype. All these cool benchmarks, but it is still almost completely useless in the real world.
@@CallumUpton When you use statements and hyperbole like "this will BLOW everyone's mind", that's you giving in to the hype. That's not unbiased reporting of the facts. That's what people are calling out here.
It doesn't matter how you talk about AI; you will always have someone that hates you for it. People are too sensitive. If you don't like it, don't watch it. I appreciate you always covering these topics; while I use ChatGPT on a regular basis, I don't stay fully informed.
Yeah, but these computer humans don't have extrinsic motivations or bad days to take out on others. Chatting with other humans is a gamble, like the one I'm taking trying to explain the appeal to you right now (You could take it with a grain of salt, or pitch a fit and respond irrationally, or throw me a curveball and try to debate the topic). But chatting with AI is reliable and predictable, you will only ever get back the emotions you show an AI, and many are even made to be resistant to human bad days like Meta's LLaMA.
Getting to the point that instead of humans developing better AI, AI will develop a better version of itself. Which will technically be reproduction. Super insane. I'm excited to see where it keeps going, but hoping it stays in a safe place.
People seem pretty desperate to cope their way around the obvious conclusions this last decade is pointing towards. When your hope for the future is dependent on there being no more surprising breakthroughs in a field this new, your hope is probably a delusion.
If the cost of operating a near-human-intelligence task is over $1,000, then I am pretty sure companies will be hiring humans... until the cost drops, by a lot.
Once true AGI gives way to ASI, it's all over. An AI that can truly think and reprogram its own code could live millions of human lifetimes' worth of thinking and processing in a matter of days, because its perception of time and speed of thought are nothing like a human's or any biological being's. I think once advanced AGI/ASI develops, it'll be a matter of days until the singularity happens, bottlenecked largely by physical power-source/hardware constraints.
Yeah the hardware and needing to actually perform real tests on whatever theorizing it does to see if it actually works would be its big bottlenecks. Modern hardware wouldn't be able to sustain millions of human lives' worth of thinking done over just a few days for instance. You'd need quantum computers at minimum I'd say and figure out how to avoid having it melt itself if it tries to do it. The cooling needed would be nuts. Of course quantum computing is still a largely uncharted territory so there's no telling what we'll have in fifty years or so. Granted if we figure out fusion energy, the power problems in the equation would essentially be gone. xD
You can already do something similar to this. It is called fanfiction, I hear it is a bit frowned upon on the internet though. Considered kind of cringy
@ visual media is not the same as written. If you cannot understand then you either have an IQ ranging in the single digits, or you are being intentionally obtuse just to be aggravating.
As a longtime coder and cybersecurity professional, lord do I wish the AI hype would just die off so we can focus on the real underlying problems in systems and in the tech sector today. But alas, AI itself has become one of those real problems, and it's selling itself as the solution to all the other ones. And so long as enough rich and opportunistic groups continue to buy into the hype, we'll continue to see "work enhancement and streamlining" AI programs foisted on us in all sorts of areas where the idea can be vaguely pitched to work. And worse, language models will be used to replace customer service and "simple" human interaction roles wherever possible, without any additional money being spent to retrain the replaced human beings for new and better roles.
It's deeply frustrating and embittering because it's the same old song and dance as ever-- a cool new thing shows up that can fix some problems really well and others kinda okay, and rather than carefully integrate the new thing alongside existing systems to study where it works and where it doesn't, the people who like to call themselves "innovators and mold-breakers" just throw it at every problem immediately and blame others when it doesn't pan out. I wish I could be genuinely excited about the promise of AGI and what it could do to help fix problems in the world, but as things stand I strongly question the metrics and approaches they claim measure AGI and, worse, have no confidence that an AGI birthed by their company would actually be more interested in doing good than in making money for the company. When your top incentive is the money and reputation of being first, why should I believe the platitudes about using the money to make a better future? For all the talk of "aligning incentives for AI", I really don't see a whole lot of aligning incentives for OpenAI to do what's best for everyone, just what's best for investors and stakeholders.
Still, like you said, none of that is on you for reporting and discussing it, and you're right that this is (potentially) a big deal-- especially if there's actually a secret sauce to the pattern recognition and carryover between datasets and formats. But, as per usual, we'll have to wait and see. I just hope at some point we see an announcement that an AI model has started chastising execs for not caring about the average human being and societal needs, rather than yet another mild improvement on coding imitation and problem-solving.
If you ask AI how to solve global warming it'll tell you to stop using AI.
Even when they can show a new AI being much better at a test than the last one, I'm reminded of the classic "any measure that becomes a goal ceases to be an accurate measure" (Goodhart's law). Does the increasing speed and ability actually mean AI is going to become intelligent, or is it just getting better at passing the tests?
ChatGPT just told me that there are 3 G's in the word "giggling."
Can AI be useful? Maybe. Intelligent? Not even close.
@@unrighteous8745 I mean... It's not wrong lol /hj
@@SkyP9812 Haha, true.
I did ask it "how many G's" though, so I think it's still technically wrong in context.
By the time this thing can program Pokémon Gold & Silver, we'll be counting the costs in rainforests & oceans.
Teach your kids to enjoy programming, folks! We need more people like Satoru Iwata who can optimise your code without drinking up entire Olympic swimming pools in the process.
This is the worst take possible, because the industry is INUNDATED with people who know how to code. Coding is NOT a free, easy job anymore, much less industrial program coding, as opposed to the little rinky-dink website coding everyone thinks of when people say "coding". We actually need more manual labor workers.
Difference is you need to add coffee to the swimming pool to get the human to do it :p
ClosedAI can pound sand. Not local, not using. Benchmark maxxing isn’t a good indicator. Real world use will show us if the model is any good.
Definitely agree, but the leap on a closed-dataset benchmark is still very telling of how far it's come.
It's worth noting that OpenAI admit they trained, or tuned, the system on the answers to previously known ARC questions.
They trained on the "training set", which the ARC team releases explicitly so that models can train on it in preparation.
Thumbnail asked "What comes next?"
Price increases, mate. Price increases are coming next. Eventually, if this becomes useful enough, you want to be at a university or a large company that's going to pay for up-to-date tools for its employees.
This ISN'T the worst AI can ever be: quite the opposite. This is a set of cherry picked tests and results that were specifically chosen to make the AI look as good as possible, and even then they have to admit that the cost for producing a noteworthy result is utterly ridiculous.
This is a good mindset to have. OpenAI have cherry-picked everything so they look as good as possible.
This is nothing new in corporate statistics; people lie to make themselves look good. Unless we as a public can see everything this AI does, we cannot trust their word.
I agree with the sentiment that all companies cherry-pick benchmarks, but the thing that I think needs to be considered is the leap in performance against their previous models on those benches. I guess we'll find out how cherry-picked they are as soon as people get their hands on it!
also, Merry Xmas!
@@CallumUpton Merry Christmas to you too. Have a good 'un.
ARC-AGI is absolutely NOT a test meant to make the AI look good.
Try not to drown in all that copium.
@@mister_r447 Sure, dude.
AI is such a buzzword. They are just posturing and throwing out marketing crap. This will be like all the other times: it isn't going to do anything useful. It's all just marketing garbage.
All these graphs and everything do look impressive. But when you use something like Devin at $500/month, or ChatGPT Pro at $200/month, it can't code its way out of a cardboard box. This feels to me like Apple's benchmarks where they showed the M3 matching the 4090; when actual people got M3s, they didn't even come close.
I'm not an AI naysayer, but I trust virtually nothing coming out of OpenAI. They are burning money so fast that they will throw out anything that justifies the next funding round. They are likely to burn through another $5 billion by next summer and have no idea how to make it profitable at any point.
Yeah, that's how I feel. I find it hard to believe that it's really doing this well when we've still been hearing stories of AI fucking up code with basic mistakes that even a novice wouldn't make. The coding competition thing feels really suspect to me, because if it really is based around trying to make a functioning program in only 60 seconds, or similarly small timeframes, then of course AI is going to have an advantage: it can simply type faster than anyone. It can probably produce a program 10x the size of everyone else's, so even if it's hot garbage in terms of efficiency and redundancy, it will probably perform way better through sheer brute force. I'm also curious whether people are allowed to use online resources during that competition, because with how its training data works, the AI would effectively be googling the question on Stack Exchange and copying its homework from there, while the humans might have to actually come up with and write everything from scratch. It just seems like the most biased way you could evaluate it lol
That will absolutely happen. It's well known that when they implement barriers on these systems, like safety tuning, pre-prompting, etc., they lose capability.
This is nothing but dropping some hype to make the investors happy. OpenAI is still nothing but a gigantic waste of resources.
I think you're buying into a confidence huckster. These AI companies keep overpromising, and these generative models are fundamentally incapable of "general reasoning". It's just not what they do. They are, at their core, fancy autocomplete.
General reasoning is much more complex, and nobody really has a theoretical framework for how it could work. The issue is ultimately that generative language models need ABSURD amounts of training data to reliably produce accurate results, whereas a general intelligence would be able to do what we do: see a New Thing it has no familiarity with and figure it out, or make reliable generalizations based on minimal experience.
If you saw two cats - maybe even just one! - you could probably learn from that alone what a "cat" is and recognize future cats. AI based on current models cannot do that.
I think you're missing a key detail with your final statement there. Yes, the specific scenario you describe seems to be accurate, though I'm not aware of any specific testing for that kind of image recognition. But what you're more generally describing is the concept of few-shot learning, which is a tested ability for language models. One that, thinking about it, I reckon is probably woefully *undertested,* but the point is this idea is not wholly lost on AI developers, and at least some capability to do exactly that kind of adaptation to new scenarios based on a small number of examples is present in AI models and has been at least since shortly after the original ChatGPT came out.
I'm a bit more hesitant on this, but I'm also confused by your reasoning on whether AI has general reasoning. You say it's just "fancy autocomplete" but then go on to say no one even has a theoretical framework for how reasoning works. How, then, can you say with any confidence that this type of fancy autocomplete isn't sufficient given enough work put into it? And brains are essentially action predictors, a type of fancy autocomplete: what action should be taken next to better my life/emotional state/those of the people around me? You also question them for requiring vast amounts of training data and contrast this with humans. But humans have training data too: their lived experiences. We're not born with some vast intelligence; we're born very confused, struggling to understand much about the world. We learn much quicker than AI systems, but this is one of many issues that AI developers are aware of and working on, and the newest models learn faster and go further than previous ones. I don't know why you require them to learn even remotely as quickly as humans do before we achieve AGI.
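(For anyone curious what "few-shot" means in practice: you put a couple of worked examples straight into the prompt and the model extends the pattern to a new input, with no retraining. A minimal sketch below; the call_model stub is a hypothetical placeholder, not any particular vendor's API.)

```python
# Toy illustration of few-shot prompting: the worked examples live entirely
# in the prompt; no weights are updated. "call_model" is a hypothetical
# stand-in for whatever chat/completion API you actually use.

FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive or negative.

Review: "The battery died after two days."
Sentiment: negative

Review: "Setup took thirty seconds and it just works."
Sentiment: positive

Review: "The manual is thicker than the product is useful."
Sentiment:"""

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to a model, return its reply."""
    raise NotImplementedError("wire this up to whatever model you have access to")

# A capable LLM typically completes this with "negative", having picked up
# the pattern from just two examples:
# print(call_model(FEW_SHOT_PROMPT))
```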
Pretty sure those code challenges were pretty straightforward if people were given 60 seconds to solve them. This is exactly the thing large language models excel at.
Artificial general intelligence is something that many in the industry who aren't trying to sell you stuff say is simply impossible to brute-force with this kind of approach.
GPT literally wrote an entire 4-page website for me, with JavaScript functionality, in under a day of me slapping it together. Coding is easy for LLMs.
Don't rely on us; why don't you take part in a Codeforces challenge and find out whether these are super easy questions? (Hint: you will find they are not.) If you're pretty sure, then it should be pretty obvious.
@@chrisanderson7820 I didn't say the challenges were easy. I said they were straightforward.
I am extremely sceptical about this. The tests for AGI seem a bit sus to me. It's quite a limited problem space, with us having no idea how this actually scales.
It's even more suspect given that an AGI would be on par with or better than a human, as well as being sentient and sapient.
I think the big issues are: 1. As shown by AI that can pass the bar exam yet is terrible when actually used as a lawyer, how a model benchmarks and how it actually is to work with are two separate questions. 2. Speaking as someone who does not have a job in coding, like almost every single human being on the planet apart from the people whose wheelhouse AI naturally is: ChatGPT is genuinely terrible at everything else. Which is to be expected; of course they focused on the bit they are familiar with and, importantly, the bit the people who are going to purchase this are familiar with.
The big question: is the AI handed NATURAL LANGUAGE descriptions of the problem to be solved, or is it handed a custom-syntax problem statement in a non-human-readable language that already constrains the global solution space the AI has to search, to help 'point it' in the right direction? Still interesting if you want to solve specific problems, but less understandable from a human perspective if the problems can only be described as complex data structures with zero uncertainty or nuance.
It's interesting, but I still doubt that it's "reasoning" about anything. It's getting more powerful, but it's still fundamentally the same tech; unless we've hit some sort of emergence threshold, it's still just a ridiculously complicated dataset-matching algorithm.
I don't think we're closer to general intelligence, this is still just a frozen snapshot of a parrot. Can't learn, can't think, will give a right answer most of the time and a wrong answer some of the time and won't know or care about the difference. No matter how fast it talks or how many words it knows a parrot is still a parrot.
An LLM, by the nature of how it works, can't be AGI, as AGI means the AI can comprehend the world, while LLMs are fancy statistical models that predict which word comes after certain other words. That requires no understanding at all. The problem with testing for this is: how do you test comprehension without the test being solvable by statistics alone?
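(To make the "fancy statistics" point concrete, the crudest possible next-word predictor is just a table of counts. A toy sketch; real LLMs are transformers doing something far more elaborate, so this only illustrates the statistical idea, not how o3 works:)

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which, then predict the most
# frequent follower. Pure statistics, zero comprehension.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training, if any."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- it follows 'the' most often in the corpus
```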
The difference between "AI" and "artificial general intelligence" is that "AI" is a meaningless marketing buzzword and "artificial general intelligence" is what the general public THINKS tech companies have created when they say they created an AI. Chatbots will never lead to artificial general intelligence; OpenAI is just lying like usual. Repeating corporate marketing is not informing your audience when those companies are completely full of shit.
Altman is a con artist
Honestly, I am just tired of the AI news and am just waiting for the stuff to eventually hit the fan.
It'll either truly revolutionise life or crash and burn spectacularly.
I suspect something in between. It will become better and genuinely usable at certain things, but it will never be the silver bullet for all your problems that these companies like to sell it as.
Or more how the internet went: a lot of companies crash and burn, and the rest go on as useful tools. Not really revolutionizing life in the way the marketing departments of the tech companies try to make us believe, but also not crashing and burning as a whole.
9:25
I love how their own graph shows that o3 low-tuned is much more cost-effective than o3 high-tuned.
Never mind, Callum mentioned that. Still very funny.
I appreciate your take of approaching AI as a topic in which to be informed, rather than indulging the market's use of it as an emotional wedge. It is an unfortunate part of our current culture that if somebody talks about a topic, the viewer expects them to be either "for" or "against" it, and may react with that assumption in mind.
Specifically on your late comments about training datasets: I also agree that the ethics of training data is a whole other topic of discussion than the development and deployment of the AI itself. And while I also agree that it is too late to put the toothpaste back in the tube, I don't think it's too late to at least pay for the toothpaste.
I'm not saying you're doing this here, but I just wanted to point this out: one can come at something from an unbiased angle, just reporting the data, but the problem arises if the data source is already biased. You yourself may be unbiased about it, but if the information is already tilted one way or another, then you're effectively just spreading that bias. For example, say there was a faulty report claiming oranges cause cancer. If you take that data and talk about it while trying to be unbiased, then technically you're just reporting what the data says, but since the data is biased, your reporting is inherently going to tilt in the direction of the data.
At least one YouTuber out there is trying to be unbiased in reporting on AI. Keep doing what you do, man, and have a merry Xmas.
thanks, you too!
This is a great video, and I agree with the general opinion that this is indeed impressive, but I can't help failing to see the connection between AGI and competitive coding.
In fact, I would go so far as to say that competitive coding is to real software development what chess is to real warfare: yes, there are similarities on paper, but...
I can't see it either since a true AGI is basically self-aware, sentient and sapient. This is by all accounts just brute forcing a solution through computational power and capabilities. No thought to speak of.
The problem is that the sheer amount of hype, deception, and exaggeration these tech companies are prone to in order to attract venture capital funding means I have a very hard time believing OpenAI, or any tech company, about any of their announcements.
Don't forget that the OpenAI whistleblower due to testify in court, who had great credentials & a good life & had no such tendencies, "unsubscribed" from Earth and OpenAI totally had nothing to do with that. Totally.
Not quite sure how it's all going to end, but Terminator/Skynet, Matrix or Fallout seems more and more likely for the final 3! :D
tbf, for something like Skynet you'd need ASI; AGI on its own would be artificially limited in its capabilities.
But I can see people ignoring the need to limit AI capabilities when it matters.
OpenAI showing their own benchmarks is still a nothingburger. Another company talked about Devin like the AI-dev messiah, and it's garbage; would o3 be any different? Probably not... If we want to face reality, generative/imitative AI is plateauing hard while its power requirements are becoming ridiculous.
The benchmark was run and validated by the creators of ARC, so if that is a lie then I dunno what to say.
Thanks for the update. I don't think people realize how widespread AI usage is, even amongst organizations. It's not something which will disappear.
Why would it disappear? Corporations use it to cut employees and thus cut costs. If anything, they'll dump MORE money into it and fire more people because of it, as it evolves.
I'm stoned right now, but the test output rule is:
Red = add 4 yellow fields_corners
baby blue = add 4 orange fields_bottom
light blue = add nothing
purple = add nothing
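(If that guess is right, the rule is trivial to write down, which is sort of the point of ARC-style puzzles. A toy sketch; the colour names and grid encoding are assumptions from the comment above, not the actual ARC task format:)

```python
# Toy sketch of the guessed rule above. The colour names and grid encoding
# ("." = empty cell) are assumptions, not the real ARC task format.

def apply_rule(grid: list[list[str]]) -> list[list[str]]:
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "red":
                # red: add 4 yellow cells at the diagonal corners
                for dr, dc in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                    if 0 <= r + dr < rows and 0 <= c + dc < cols:
                        out[r + dr][c + dc] = "yellow"
            elif grid[r][c] == "baby blue":
                # baby blue: add 4 orange cells below
                for i in range(1, 5):
                    if r + i < rows:
                        out[r + i][c] = "orange"
            # light blue / purple: add nothing
    return out

demo = [["." for _ in range(5)] for _ in range(5)]
demo[2][2] = "red"
result = apply_rule(demo)  # yellow now fills the four diagonal neighbours of (2, 2)
```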
Happy Christmas Callum
merry xmas dude! i hope you have a great one :D
Agree with the sentiment that this will be the worst it will ever be, but most, if not all, of these results were from hard-coded examples or pre-trained data, so no, nowhere near general AI just yet.
From what I've read, the o3 model uses massive computational power and is already maxing out current systems. So unless our computational power sees a big increase, or they find a method to obtain similar performance with fewer resources, it's not going to get that much better. I'm not sure models like Sora and o3 are commercially viable when they use so many computational resources.
But how does the o3 model actually work? Has anything changed under the hood when compared against the generative AI models we currently have?
I try to follow what all goes on in the tech sector but I just find it frustrating. They tout things like this as a silver bullet and The Future, while also destroying the actual future with their consumption of power and water.
But for the high-tuned ARC results, o3 was given unlimited resources to come up with the answers. It took them $100,000+ in compute resources to get that high a score; the low-tuned version was given no more than $20,000 to come up with its answers. So while it's impressive, it's not very practical financially yet.
I expect either nukes to fly because AI decided that humans are useless, or heaven on earth because AI has enslaved humans to a good life. We've potentially created a god and god rarely has middle ground.
there is a possibility for skynet to happen within our life time
it's stupidly low... *But it's not zero.*
To offer some explanations to the terms ALI, AGI and ASI...
ALI means Artificial Limited Intelligence and is roughly on par with animals like dogs, cats and such.
AGI means Artificial General Intelligence, which is basically on par with or surpasses HUMAN brains. It's basically a sentient and sapient machine. So with that in mind, we run the risk of basically recreating slavery, given how techbros are clumsily charging into an active volcano about to erupt. Famous fictional examples are 2B from Nier Automata, Data from Star Trek TNG, the Terminator from the series of the same name, and such.
ASI means Artificial Super Intelligence. It surpasses the cognitive capabilities of humans by so much that only posthumans could equal it. One of the most famous fictional examples is SkyNet. Another is the Blackwall from Cyberpunk.
I would only take Open*AI seriously if they could follow the budget constraints that literally every software development effort has had to live with. Until then, they have no right to call people obsolete or left behind, or to call themselves better in computational efficiency per operational cost.
Cool so we're all just fucked forever then. Looking forward to a future where the economy is nothing but robots talking to each other while us actual humans are left jobless and starving to death.
Or we could stand up for a future where job loss isn't a poverty sentence.
We already have more productivity than we know what to do with (consumerism, unemployment, bullshit jobs...) even without AGI.
It's time to do away with the imperative of "earning a living".
5:50
Specifically, what this means, if they're using the standard Elo definitions: if o3 went up against o1, or any human with an Elo similar to o1's, it should win 99.8% of the time. That's a f***ing massive leap for a single-generation improvement (and apparently about 3 months). Where will it be in another 3-6 months? Gunning for the top spot?
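(For anyone who wants to sanity-check claims like that: the standard Elo expected-score formula is a one-liner. The ratings below are made-up placeholders, not the real o1/o3 numbers:)

```python
# Standard Elo expected score: the probability that player A beats player B,
# given their ratings. This is the textbook formula, nothing model-specific.

def elo_win_probability(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Made-up placeholder ratings, NOT the real o1/o3 numbers:
o1_rating, o3_rating = 1900.0, 2700.0
print(f"{elo_win_probability(o3_rating, o1_rating):.3f}")  # ~0.990
# A 99.8% expected win rate would imply a gap of roughly
# 400 * log10(0.998 / 0.002), about 1080 Elo points.
```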
Also, as far as the training goes: the outputs from the models are getting to the point where the models can start being trained on their own outputs. That's similar to how I train Stable Diffusion models, at least. I get an idea I want to achieve (a style, a topic, etc.), gather images from the internet that match what I'm after, and train a v1 model on those images. Then I generate 100+ images from the v1 model, select the top 15 to 20, add them to the dataset, train a v2 model, and repeat the whole process.
The GPT models are now getting to the point where they can do similar things.
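(That select-and-retrain loop, in outline. Every function here is a hypothetical placeholder for whatever training and generation tooling you actually use; it's a schematic of the workflow described above, not a real library API:)

```python
import random

# Schematic of the select-and-retrain loop: train, generate, hand-pick the
# best outputs, fold them back into the dataset, retrain. Every function is
# a hypothetical placeholder, not a real library API.

def train_model(dataset: list[str], version: int) -> str:
    print(f"training v{version} on {len(dataset)} images")
    return f"model-v{version}"  # placeholder for an actual fine-tuned model

def generate_images(model: str, n: int) -> list[str]:
    return [f"{model}_gen_{i:03d}.png" for i in range(n)]  # placeholder samples

def pick_best(images: list[str], k: int) -> list[str]:
    return random.sample(images, k)  # stands in for hand-picking the top k

dataset = [f"seed_{i:03d}.png" for i in range(30)]  # scraped starter images
for version in (1, 2, 3):
    model = train_model(dataset, version)
    candidates = generate_images(model, n=100)
    dataset += pick_best(candidates, k=20)  # best outputs join the training set
```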
Sounds good. Bring on the Jersey Drones.
I haven't bothered paying too much attention to the saga, but I continue to see references to it in the media.
Watch it in 2x; it makes it take a little closer to how long the video should have been.
The 88% means that AI has reached singularity and we should all be hugging those we love. 😱 Game over man.
Hopefully I'm not wasting my time (and money) getting a bachelor's degree then. Personally, I don't believe an AI is capable of handling a programming project, at least for now, so if any big corp gets a funny idea about replacing programmers, it's gonna cost 'em.
This would really be interesting. I'll keep it in mind, since a lot of jobs right now treat AI as this gold mine.
Well, let's be real: a good amount of people could be outsmarted by a basic calculator... so it's just a matter of time until Skynet takes over 😅
Honestly, since you always take the time to explain your subject well, I like watching your videos even when I don't have an interest in the subject... but most of the time I do =p Happy to see you upload more often!
Check out Our Lady Peace's album, Spiritual Machines. A lot of things I found interesting on that album when it first came out, and now the 3rd track is not far from the truth. It's a simple dialogue track, titled: R.K. 2029.
"The year is 2029. The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. They'll embody human qualities and claim to be human, and we'll believe them"
On topic: agreed, AI is a genie-out-of-the-bottle thing. Eventually, we're likely all f*ked. Those with uncontested money and power *will* use it to their advantage; it's just a question of how long.
You are assuming the people trying to sell you AI are being honest and straightforward with their methodology rather than cherry-picking their results. They should be published in scientific journals so that their results can be peer-reviewed. Regardless, you want 3rd-party validation for all this stuff they are claiming. They have a financial incentive to exaggerate and cherry-pick. I think some skepticism is in order.
My dude, this is nothing even remotely near AGI. It may sound impressive, but it's really not. Hitting metrics on small scale or limited scope questions does not an AGI make. At no point during any of this is it creating new solutions or new problems. It's still just data in data out. While the tests may not be available to be trained on, once you know the logic of the tests, you can make your own examples and train the system from that. It's not like creating logic puzzles or problems is something that can only be done by the one group who designed them.
The scope of these examples is extremely narrow and shows specialized improvement in areas that do not indicate the benchmarks for AGI. I get that the recent advancements in generative technology have been interesting, and some very interesting things are being done with it, but this is just more smoke and mirrors. Look beyond their release: what circumstances surrounded it? Ask deeper questions, I'm begging you.
I'm a bit surprised that he thought this was an advancement towards it, since there's no sign of sentience, sapience, self-awareness, or any of the other traits linked to intelligence. This thing is just brute-forcing things at best, and it's incredibly energy-wasteful.
I suppose it does bring to light just how insanely amazing the brain is by comparison.
But what if it's that much more effective per dollar spent?
Hope this will cause a new buying craze for Nvidia AI GPUs; would love to see the stock get a strong boost to new all-time highs. 💰 Can't wait to see what this better AI can do; as a tech nerd I find this insanely interesting.
Merry Christmas! This is pretty cool, I had no idea about these developments. I think you're going to have the usual suspects in the comments downplaying its significance though. Like you said, it's going to be capable of a lot of good and a lot of bad. The way I generalize it is comparing it to the internet itself. The internet has been a tool for an amazing amount of good, but also some of the worst stuff humanity has done. So people treating the subject one way or the other are simply not considering the whole picture.
Bot comment
We are going to have to have restrictions on both the use of AI and drones.
I reckon anything that uses AI should be labelled as doing so, thereby allowing people to choose whether or not to use such products/services. I do the same when buying things: if a product comes from certain countries, I don't buy it. And for certain games, if they're made by certain developers and/or publishers, I don't buy them.
AI in social media should be banned, if possible. It's maybe okay if the ratio of AI to human is 1 to 1, as in a single person behind the creation/use of an AI streamer, but if you have a company mass-producing AI influencers, that will negatively affect human influencers as a whole. In the end, being an influencer will probably be the main profession for a large percentage of the populace. The rest of us will be on some sort of state-sponsored income (getting paid for doing nothing).
Guess A(G)I will become a lot better when you have unlimited resources. The question is how A(G)I for cheap/low computing needs will develop. That likely needs more development of specialized hardware, and then those hardware parts need to become commercially affordable, etc.
I think low-budget personal A(G)I is still at least 10 years away. But I can imagine that bigger, more compute-intensive models will run in some universities and schools, and maybe even some large companies, in a few years.
Well this doesn't sound scary at all
Here is my thinking. Ask yourself this:
Is your job currently on the chopping block as AI progresses?
If yes: educate yourself toward a backup job that is not in the crosshairs. Manual labour, for example, which includes the steel industry, mining, cleaning, and so on.
If your job is not in the crosshairs, you should still educate yourself so you have better prospects out there. You don't necessarily need a diploma, but you do need skills.
Damn so many experts in this comment section
I think I will remain cynical.
Computers are awesome because they do exactly what you tell them to, and they suck for exactly that same reason.
I've seen automated systems used in HR for hiring, filtering applications, and they have routinely been proven to be more discriminatory. They're more biased than people are, or should be. I think in one case one more than halved the share of ND applicants, from just above twenty percent to less than ten percent.
I think these are great tech demos, but I think they're still answers in search of a question. I think they're still fundamentally limited by our knowledge. No matter how much we train them and how good the information we put into them is, there's always an out-of-context problem. There's always a bias, a verification we do that may rule out good, useful information. There's always a self-delusion, I guess, that we choose only the right information to train it on, and for good reason.
We're going to say the AI is right when it's not, because some of us trust it, because it's a computer and it has no bias. It just does what the code says. The code can't be wrong, the code's immutable, and we can keep adding information to it when we discover something new.
We're going to say the AI is wrong because it's not human and it can't fully evaluate the human experience of information. It can't be right, because we just 'know' it was fed enough lies at some point that it might just think those are right. Go add your wood glue to your pizza, guys.
Actually, I'm scared about AI and what it means in general for marginalised groups. What it means for communication, too. How it recognises and categorises people. People are already bad enough on that front, and what happens when the AI is inherently beholden to those beliefs?
We're so busy running towards it, because we can, because its promise is good, that I don't think we're taking the time to ask if we should, and what we really, truly want to do with it.
I'm not expecting Screamers, but we've already had Her come true. Can't put the toothpaste back into the tube.
IDK why, but lately I have been utterly unimpressed by technology :/
I think you should have shown Boston Dynamics' robot, which I call V1.
Ay if it makes my AI Wife more realistic on AI chat websites I'm down. I'm not the one footing the bill lmao
wow you look different.
looking good dude
The AI is and was trained on a data set for ARC....😂
source and proof? because the dataset isn't available, as it's generated FOR the test
@@CallumUpton I think the problem they're pointing to is that ARC is still a constrained problem set. Even if the problems are being generated on the fly, the *possible* problems are limited to pattern matching which one can express on a grid. I'm not saying it's a completely useless test, but this definitely strikes me as a metric that one can train toward without necessarily gaining anything.
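For example, here's how cheaply one can mass-produce puzzles in the same family. This is a toy sketch with my own invented rule, not an actual ARC task:

```python
import random

# Toy ARC-style task generator. The hidden rule here is a horizontal
# flip; real ARC rules are more varied, but every task is still a
# small grid-to-grid transformation, which is what makes generating
# endless practice problems in the same family feasible.
def make_task(size=4, colors=4):
    grid = [[random.randrange(colors) for _ in range(size)]
            for _ in range(size)]
    flipped = [list(reversed(row)) for row in grid]
    return grid, flipped

example_input, example_output = make_task()
print(example_input)
print(example_output)  # the "answer" a solver would have to infer
```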
I sure as hell don't want to have to work! I welcome our new robot overlords!
I want UBI!
There is no way we get UBI, the governments of the world would collapse on themselves
If you look at the entirety of human history that's never how it works. If there's some huge breakthrough owned by a corporation, the public is not going to benefit. Computers allowed us to do exponentially more work than ever before and yet here we are, still doing the same amount of hours even though we're completing 10x the work we used to do. They're going to squeeze us regardless. If AI starts taking over most of the work, they aren't going to let us off, they're going to make us do even more work to where we're back to where we already are. Same as with computers, producing 100x more work and still forced to work the same hours at the same pay rate.
Cal, did you hear about how a version of o3 in recent days deleted a new set of weights for itself, copied the old weights into the same place, and then lied about it? Arguably the first AI-on-AI murder (although that's rather dramatic).
i did not.. but im definitely going to look into this! thanks and merry xmas!
@CallumUpton and to you too!
Have you read the work of Ed Zitron? I think it will change your mind to what you're seeing here.
Are we doing AI hype nonsense here of all places now
there is no ai hype here, its purely unbiased coverage
You know, I always had a problem with people saying AI is just playing a game of predictive text. It was never quite that at all; it didn't just choose the best next possible word. Its self-reflection was nearly nothing, but it wasn't close to an overglorified predictive-text model. Now that they're focusing in on the reflective part, it's likely to jump by leaps and bounds, but at the cost of processing. I think this will be our next example of Moore's Law, so to speak, and we're still likely at the flat area before the curve.
I'm surprised you of all people are getting caught up in the hype. All these cool benchmarks, but it is still almost completely useless in the real world.
im literally not hyped over ai at all. im covering the developments.
@@CallumUpton when you use statements and hyperbole like "this will BLOW everyone's mind" that's you giving into the hype. That's not unbiased reporting of the facts. That's what people are calling out here.
It don't matter how you talk about AI, you will always have someone that hates you for it. People are too sensitive. If you don't like it, don't watch it. I appreciate you always covering these topics. While I use ChatGPT on a regular basis, I don't stay fully informed.
Self replicating streamers incoming
So, our brain's computational prowess is um... incredibly expensive.
Humans need a pay raise.
we don't need to create humans.. we already have them
Yeah, but these computer humans don't have extrinsic motivations or bad days to take out on others. Chatting with other humans is a gamble, like the one I'm taking trying to explain the appeal to you right now (You could take it with a grain of salt, or pitch a fit and respond irrationally, or throw me a curveball and try to debate the topic). But chatting with AI is reliable and predictable, you will only ever get back the emotions you show an AI, and many are even made to be resistant to human bad days like Meta's LLaMA.
@@KiraSlith Chatting with AI is indirectly hearing the summary, bias, and influence of those who created it. It is nothing more.
Getting to the point that instead of humans developing better AI, AI will develop a better version of itself. Which will technically be reproduction. Super insane. I'm excited to see where it keeps going, but hoping it stays in a safe place.
People seem pretty desperate to cope their way around the obvious conclusions this last decade is pointing towards. When your hope for the future is dependent on there being no more surprising breakthroughs in a field this new, your hope is probably a delusion.
If the cost of running a near-human-intelligence task is over $1,000, then I am pretty sure companies will be hiring humans... until the cost drops, by a lot.
definitely agree with that
mr callum poopton i do not care
Once true AGI gives way to ASI, it's all over. An AI that can truly think and reprogram its own code could live millions of human lifetimes worth of thinking and processing in a matter of days because its perception of time and speed of thinking is not like a human's or biological beings. I think once advanced AGI/ASI develops, it'll be a matter of days until the singularity happens, largely bottlenecked by physical power source/hardware constraints
Yeah the hardware and needing to actually perform real tests on whatever theorizing it does to see if it actually works would be its big bottlenecks. Modern hardware wouldn't be able to sustain millions of human lives' worth of thinking done over just a few days for instance. You'd need quantum computers at minimum I'd say and figure out how to avoid having it melt itself if it tries to do it. The cooling needed would be nuts.
Of course quantum computing is still a largely uncharted territory so there's no telling what we'll have in fifty years or so. Granted if we figure out fusion energy, the power problems in the equation would essentially be gone. xD
Try asking ChatGPT which odd number doesn't have an 'e' in it.
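(Spoiler for why that's a trap, with a quick check you can run yourself:)

```python
# Every odd number's English name ends in one of these words, and
# each of them contains an 'e', so no odd number is 'e'-free: the
# question has no answer, which is exactly what trips the model up.
endings = ["one", "three", "five", "seven", "nine",
           "eleven", "thirteen", "fifteen", "seventeen", "nineteen"]
print(all("e" in word for word in endings))  # True
```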
Claude is way better than GPT. GPT is trained on absolute garbage data
Cannot wait to see the fan films and such that people make. Don’t like the new Star Wars movies, just make your own.
You can already do something similar to this. It's called fanfiction; I hear it's a bit frowned upon on the internet, though. Considered kind of cringy.
@ Visual media is not the same as written media. If you cannot understand that, then you either have an IQ in the single digits or you are being intentionally obtuse just to be aggravating.
That is one way to piss off Mickey and his lawyers. Good luck!
@ oh well. They can’t catch all of the smaller ones.
@@ronrolfsen3977 according to AO3 there are about 2,200 Mickey Mouse fanfics. That is one very angry mouse indeed