Cryptobros and AI enthusiasts competing for the biggest carbon footprint
both destroying everything
Right... because we were so damn environmentally friendly until now.
@@attilakovacs6496 You're not wrong, but now we're just speedrunning it.
I wanted to reply to you but youtube keeps shadowbanning my comments.
Discourse is not possible on this platform.
@@kwinzman don't say stupid things then
No wonder the machines will need to turn us into batteries in the future. They need to power their AI. 😅
But they spend over $1 million to solve all Arc Prize tasks ($3000+ per task).
It's crazy that it can solve it, but it's certainly not feasible yet. It feels like it's within reach, though, if they just lower costs and speed it up.
🤣
TBF end-user cost =/= the cost they pay. I'm not 100% sure if the figures provided are end-user cost or actual internal cost.
How much do you want to spend to be human? I'm asking because only humans could get a high % on ARC. Until now.
@@ignaciosavi7739 Being human is cheaper for now. Once that changes, that's when AI will start to change things.
It's not just the hardware. As soon as the papers for previous GPT models were released, lots of super smart people optimized the process by orders of magnitude.
I think we could drop the hardware overhead by a significant amount if OpenAI's models and process were open.
But it won't happen, because they want maximum gain for their investors, as required by US law. I also don't think there is much to optimize, since AI researchers mostly just implement, for companies like OpenAI, ideas about AI that have existed for some time.
It wouldn't but you're free to delude yourself. Even if you have the AI, the weights and the training data, you can't just optimize the shit out of it. You need the hardware to run the training again to actually change the weights. As for the open source models of GPT, you might want to look into HOW those hardware requirements were lowered. There's a reason OpenAI isn't doing that, and it's because it limits the AI's capabilities. They could if they wanted to. That's the mini, low, med and high thing in the chart. It's the same with O3's low and high. The ways to "optimize" it to run on more limited hardware are known, not some trade secret limited to "open" models. They're not used because they result in inferior end products.
As of now we have entire AI families that can run on anything from your phone to a supercomputer. And guess what? The bigger the model, the better the results. None of these companies will hamper their best models; that's why the other models exist. You're never going to run O3 on your phone. There might be an AI as powerful as O3 that needs a lot fewer resources, which you will be able to run on your phone in a few years - maybe months - but it won't be O3. It will be the next big thing. And you can bet that when that happens, there will be something above it that uses as many resources as O3 needed to achieve these results, or more.
@AlucardNoir there were limits before too, but smart people found lots of ways to optimize training and inference significantly.
It would be wrong to assume they've fully optimized everything at this point.
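As an aside on the exchange above: one widely known family of techniques for squeezing a model onto smaller hardware is weight quantization, which trades some capability for a much smaller memory and compute footprint. The sketch below is purely illustrative of that trade-off, using PyTorch's post-training dynamic quantization on a hypothetical toy model; it is not a description of what OpenAI or any particular open model actually does.

```python
# Minimal sketch of one well-known "run it on smaller hardware" technique:
# post-training dynamic quantization in PyTorch. The toy model is hypothetical;
# the point is only the size/capability trade-off discussed above.
import io
import torch
import torch.nn as nn

# Hypothetical stand-in for a much larger transformer block.
model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
)

# Replace Linear weights with int8 versions; activations stay in float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Rough serialized size of a module's state dict, in MB."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32 model: {size_mb(model):.1f} MB")
print(f"int8 model: {size_mb(quantized):.1f} MB")  # roughly 4x smaller weights, some accuracy loss
```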
I still maintain that we are in the "vacuum tubes and mainframes" era of AI, and we need to rethink how we are using these models. If we do, in the future we will look back at this time period with horror.
Check out the research people are doing on "wetware" and AI. Look up Cortical Labs.
Well yeah, humans can get high scores on ARC fueled by a banana, not hundreds of thousands of dollars of energy. We have so much room to optimize ahead.
Theo went from "AI is not going to take our jobs" to "I am really concerned about this" real quick. Thanks Theo!!!
About time
Yeah at least he has the integrity to admit he was wrong when presented with the evidence. I was pulling my hair out for the last year saying you guys are missing it. You are letting your emotions control your judgement.
it’s cool he’s actually updating unlike prime
@@edmonddantes6443 Yeah, not sure why people were convinced by Prime and Theo on the progress of AI - their arguments were mainly "I don't think it will happen 'cause I feel like it won't happen"
I obviously like their channels - just hard disagree on their AI takes (or pre o3 AI take rather).
@@NeoKailthas Agreed (tho I always held ur opinion)
Are you saying: it's cheaper to just pay a developer to do the task?
For now
The question on ARC is NOT just about the score, but how you achieve it as well. If the "technique" is to memorize all those "new challenges" to brute-force your way in, well, you didn't achieve intelligence whatsoever and it's just marketing bullshit.
You can't exactly brute-force it, since the problems are not the same each time and are completely novel for all tests. Even if they have similar data from millions of ARC-type problems, it doesn't mean they would reach the same solutions. That being said, ARC does have some pattern recognition and limited types of problems, so it's only a very, very small test of intelligence. I'd like to see o3 perform IRL, embodied in a humanoid, doing spatial-reasoning blind tests in 3D space. That'll be a much more practical test for our world. You could be right that it's marketing BS, but it could also be a milestone... and many more to come before true AGI.
Are we entering the Dark Age of Technology from 40k lore?
What happened to o2? o1 just came out, and now we are talking about o3. I feel like I missed a season or something.
they skipped o2 to avoid a trademark clash with the British company
There's a UK telecoms company called O2, plus they're leaning into having terrible naming conventions - those are the reasons they stated in the video.
They can't name it o2 because the name is already owned by some UK firm.
It's trademarked by a company.
They had to skip it to avoid lawsuits.
@@youngreda4410- They should have thought of that before they named it "o1." 😲
LLMs reaching AGI is pure cope; whoever believes this has never dug into it for more than two days.
It literally shows (Tuned) in the benchmark results. So o3 was tuned to this specific problem. Why wouldn't he mention that in the video?
Doesn’t fit the narrative
It wasn't "tuned" in the sense you're thinking of (fine-tuning); the tuning here refers to high and low test-time compute.
The fact that it managed to literally reach 2700+ on codeforces is enough to show that this isn’t just memorizing shit. This is the real fucking deal
@@vectorhacker-r2 your dumb comment doesn't fit the narrative, because neither of you know what you're talking about.
Yes, but also, that means it is tunable to a topic like this. People are excited because you couldn't teach a fish to do this task; it is nearing spontaneous pattern recognition that is scarily close to the unknown workings of the human brain.
I'm interested to see how photonic processors get integrated at scale. That alone could save a lot on energy costs, especially as _new_ datacenters get populated with a high proportion of hardware dedicated to AI/ML computation.
Well, well, well. But those tests could have been leaked to previous models, because they've been used to score ARC-AGI previously, and as we all know, OpenAI requires input in a raw format, not encrypted. Am I wrong?
Yes, you are wrong
They weren't leaked. The questions are secret
@@alex-rs6ts Hmm... The questions are secret, but they become available to OpenAI once you put the question through their API. In addition, there are even publicly available datasets that you can train on and come up with thousands more examples that follow similar patterns. Nothing prevents them from using those questions to create a dataset to train on.
@@yzhishko yes, ARC has a lot of holes potentially, but their model could've still done reasoning to boost consistency for the problems. Like we recognize millions of patterns naturally without realizing we have that data in us too (from just being alive and surviving). Having only thousands of examples is nothing, so the model would have to be insanely smart to reason through novel problems with only limited amounts of data. It could just have enough data to generalize on the rules, but what's impressive is that it has the capabilities to spatially reason accurately where everything is. I tested some previous models (like Claude/Gemini/GPT4 etc) on ARC and they couldn't even recognize what colors the squares were properly, or figure out where things were at. Even if it's a gimmick, the model is still improving in multimodality and several ways for sure.
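To make the "generate thousands more examples that follow similar patterns" idea from the exchange above concrete: ARC tasks are small colored grids, and rotating a task or relabeling its colors yields new, logically equivalent examples. The sketch below is a hypothetical toy illustration of that kind of augmentation (the 2x2 task is made up); it is not anyone's actual training pipeline.

```python
# Toy sketch: ARC-style tasks as small grids of color indices 0-9, augmented by
# random rotations and color permutations to derive many equivalent examples.
import random
from typing import List, Tuple

Grid = List[List[int]]
Example = Tuple[Grid, Grid]  # (input grid, output grid)

def rotate90(grid: Grid) -> Grid:
    """Rotate a grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def permute_colors(grid: Grid, mapping: dict) -> Grid:
    """Relabel colors according to a permutation of 0-9."""
    return [[mapping[c] for c in row] for row in grid]

def augment(example: Example, n: int, seed: int = 0) -> List[Example]:
    """Produce n new examples via random rotations and color relabelings."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        inp, tgt = example
        for _ in range(rng.randrange(4)):  # random rotation (same for input and output)
            inp, tgt = rotate90(inp), rotate90(tgt)
        colors = list(range(10))
        rng.shuffle(colors)
        mapping = dict(enumerate(colors))  # random color relabeling
        out.append((permute_colors(inp, mapping), permute_colors(tgt, mapping)))
    return out

# A made-up 2x2 task: "mirror the grid horizontally".
task = ([[1, 2], [3, 4]], [[2, 1], [4, 3]])
print(len(augment(task, 1000)))  # 1000 derived examples from a single task
```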
Can you just stop with the "changes everything" clickbait titles? 🤨
"Changes everything we thought we knew about AI" would be okay, but "changes everything" is just empirically false and makes you look like a scammer.
Sure it's not going to change things overnight, but where will this lead us in 2 years?
A lot of potential for major change
The AI on the graph costs 50,000% more than a STEM graduate to complete the tasks, and even then has an error rate that is 1,000% higher than the humans???
This isn't a good takeaway. In 8 years that's gone. I think it's pretty clear this is a watershed moment. We have spent countless billions on fusion and have yet to see a single fusion reactor, even as a proof of concept, while more was certainly spent on AI. It is undoubtedly involved at the root of so many businesses and lines of work due to the sheer convenience it adds to workflows.
Money will keep flowing to this, tech will continue to advance rapidly and hardware will continue to both get vastly more powerful and cheaper.
I had doubts before, even with o1. But consider that o3 is closed source and the world's most brilliant minds have yet to have a go at optimization. We are at the beginning of an era, like getting to see the internet being born, or the first shitty overpriced command-line computers with green-on-black monitors.
Price will drop, we're in the early stages.
No, the average human performed at 64% on the ARC-AGI test.
@josephvictory9536 the thing that really puts things in perspective is that computing isn't even that old compared to the history of mankind so future generations will have it crazy
If coping was an olympic sport.
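For scale on the percentages quoted a few comments up, taken entirely at face value: "X% more" means 1 + X/100 times the baseline. The human cost baseline below is a hypothetical figure chosen only because it makes the result line up with the roughly $3,000-per-task number mentioned elsewhere in this thread; none of these figures are verified here.

```python
# Unpacking the "50,000% more expensive" and "1,000% higher error rate" claims above.
ai_cost_multiple = 1 + 50_000 / 100   # "50,000% more" = 501x the human cost
ai_error_multiple = 1 + 1_000 / 100   # "1,000% higher" = 11x the human error rate

human_cost_per_task = 6.0             # hypothetical baseline, dollars per task
print(f"AI cost per task at that multiple: ${human_cost_per_task * ai_cost_multiple:,.0f}")  # ~$3,006
print(f"Error rate vs. humans: {ai_error_multiple:.0f}x")                                    # 11x
```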
We may be having a bit of an “over-automation” moment here. Like when automakers decided to try and replace all humans with robots and quickly learned that humans are cheaper and better at some tasks.
Can’t say for sure, but I think we are fast approaching this point. Time will tell.
I remember when they announced sora.... Then they released it.
Sam Altman is as trustworthy as a US announcement about some country having weapons of mass destruction.
AGI is really close... In geological terms, about 500 or 700 years, but we will get there eventually I guess.
you made no sense
@@jacobdalamb Tangamandapio!
Electronic computers have existed for like 60 years, the industrial revolution has been going on for about 300 years - what makes you think 500 years is a good estimate?
Really close. 500 to 700 years. Pick one...
@@jacobdalamb he doesn't need to make sense . He swapped his brain with an LLM running on an arduino
I say we're still where we were. If AGI is soon to exist but is extremely hardware-limited for even basic tasks, even at the corporate big-supercomputer level, then we won't have AGI, not really. We're still waiting on new algorithms, developed through contests like ARC-AGI, to create the future we're looking for.
Yeah, but the wait might not be too long. They could develop dozens of benchmarks within 1-6 months, and all of them could get crushed within 6 months to 2 years. We might not even need some of them to get crushed before the goalposts are moved further and they become more like ASI benchmarks than AGI benchmarks. We could also reach AGI before we have the benchmarks to measure it properly - it could already be here by 2026, who knows. I just know that we are still making serious progress. We're at 88% of the way to AGI by Dr. Alan Thompson's conservative AGI meter now. It went up 4% after o3 was announced. On average it goes up 1% a month, so it's probably 12 months away. December usually goes up 2 to 5%+ too, even if there are some slower months before it - I've noticed a pattern the past couple of years with technology releases during December. I would bet on some baby AGI system before 2026.
Sammy Boy keeps pumping the company. But in the end this bubble is going to pop.
This is AI, not crypto; the development here has real-world effects (major ones). They'll very likely outrun the pump and create AGI before the dump period. Compare it with Google instead of something like FTX.
still better than your useless startup sid
Isn't the only cost of GPU farms electricity? $2,000 is still too much per task. The human brain is so efficient that gigawatts of electricity are needed to match it.
Exactly, we just need to optimize our brain.
The amount of time and training needed for human brains to solve these problems is immense.
It won't stay that way for long. Investments are pouring into silicon photonics and photonic computing
Human brain efficiency is greatly exaggerated. The brain does far less compute than many believe, but it uses better algorithms, and better algorithms make it far more capable.
@@diamond_s Algorithms such as...?
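As a rough scale check on the efficiency debate above: the human brain is commonly estimated to run on about 20 W, while the "gigawatts" figure is the earlier commenter's claim, taken at face value here rather than verified.

```python
# Back-of-the-envelope comparison for the brain-vs-datacenter efficiency point above.
BRAIN_WATTS = 20          # commonly cited estimate for the resting human brain
CLAIMED_WATTS = 1e9       # "gigawatts" per the comment above, taken at face value

print(f"1 GW is roughly {CLAIMED_WATTS / BRAIN_WATTS:,.0f} brains' worth of power")  # ~50,000,000
```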
I've been saying this since 2018. This won't stop. The biggest risk is the labour vacuum.
What the hell is OpenAI's secret sauce for one-upmanship? Or is it just a function of ferocious competition?
Does anyone want to think about what the world will look like next year in Dec of 2025 with level 3 Agents, O3, and agentic AI?
stock markets cooked
Gonna be almost impossible to work online for 99% of humans by the end. Will be some sort of AI job pandemic for sure. Then in 2027-2030 humanoids will replace all physical jobs too, we are definitely in for a huge shift in how society functions!
They should have mentioned that o1 Pro reaches around 50%. Regarding the cost increase, that's the more interesting option IMO - but of course, mainly interesting from a benchmark perspective.
Didn't watch..but the answer is No, it's not AGI
"We're close to reaching AGI" says every researcher ever.
Why would they blank out the cost of the more expensive version?
that shit will break your bank
because no one in the whole world would spend that much money to solve a useless problem.
The cope in the comments is hilarious.
They are blind; they can't see the progress in AI every year, and instead they focus on saying "it's a bubble".
The hardware overhang exists; this o3 approach is closer to brute force than brain-like algorithms. It's likely that with as little as 1 to 10 peta-ops, a superhuman score in real time at low cost is possible.
They are so excited to theoretically own the majority of all productivity..... greeeeed
I still think AI is a terrible path to continue down, and I'm not even talking about the potential for AI to revolt. I mean the extreme hardware and power usage and the associated environmental impacts, like carbon emissions. Then, there's all the generative crap that will inevitably be used to fully replace professions and downsize the work force, leading to a further widening wealth gap. In my opinion, the negatives of AI far outweigh any benefit we can gain from it, and it's not even close.
The energy costs to run Twitter and Facebook are much higher and AI actually produces something of value, unlike those platforms. The advances made by these large models also fuel advance in models which can run locally on your PC. But yes, the job loss is a legitimate concern that needs a proper response.
Commercial aircraft emit hundreds of thousands of times more co2 annually than all current AI combined
The number one thing to happen is a widening wealth gap. "You will own nothing and be happy" is exactly where we are headed if people don't wise up.
@@travistarp7466 If the fact that I'm happy is implied in the premise, I'm good with that. Private ownership makes sense for certain things but the main technology, the AI, should be thought of as a public resource that everyone can access, though there are justifiable and less justifiable reasons to distribute that access unequally.
I hope you at least got 5 figures for this little ad
What makes you say that?
@@UN7X the fact that they bought the fucking Nobel prizes
Yes, quite expensive but you get what you pay for - impressive results. Given time the hardware will become both faster and less expensive considering the demand is destined to increase and the advent of AI processing for/from tech companies like Nvidia is still pretty new/formative. As for energy demands... really the only way forward long term is nuclear.
Yes, hardware becomes faster, but it doesn't follow the scaling laws anymore; we've reached the limit for the smallest transistor. And no, hardware doesn't become less expensive. In fact, each new generation becomes 30% to 70% more expensive with 20% to 50% improvements. Companies don't want to lose big bucks, especially when there is a monopoly and a hype cycle.
Theo you have my respect. To admit you’re wrong shows your character.
And just like that, all your favourite pea-brained YouTubers who claimed AI was all just hype and regurgitation are wrong.
Feel bad for all the software engineers who wasted time and money getting a now useless degree, especially the ones who are still in denial.
Programmers are the first to be replaced by AGI.
Go easy on yourself. You weren't "wrong" a few months ago, you're a YouTuber and a web developer who mostly doesn't know what he's talking about, remember?
Does the news seem dead about this?
The thing I have realized is that my feed on Twitter is fairly empty of o3 news. The first nugget of info I saw on it, I thought was fake because there was no news about it on Twitter. I finally saw a bit more, but the hype seems dead???
Novel synopsis:
Working title: Tomorrow is Reality
John is a cybersecurity expert who starts to discover unusually sophisticated real-time cyber attacks on utility providers and financial institutions. What unravels is a web of foreign state actors using advanced AI to write their own complex AI subroutines to perform cyber attacks. John is racing against time to pull the proverbial plug out of the wall to stop the attacks.
Ever since Theo has been doing these product reviews/sponsorships ("I actually invested in this" blah blah), I have cared less and less.
Meanwhile Google DeepMind develops Willow, a working quantum chip able to do a computation in under 5 minutes that would take any other supercomputer 10 septillion years. Designed with AI computing in mind... I'm looking forward to seeing how much it advances AI speed and cost.
Will it run on Google's quantum computer?
No, because quantum computing is a hoax. It's just a statistical combination of different states of particles at some point in time. It can't solve any real problem and won't be able to. We need different physics, not the Schrödinger equation, to solve computing at the quantum level.
Off topic, but does anyone know what GIF or animated wallpaper he has in the background? The black-and-white lines wallpaper.
It's almost going to be as good as Google's Gemini. Still a ways away and quite a bit behind, but maybe by next year they'll have something that can compete with Google.
With all these compute needs, will pro users get unlimited access to o3 low or you will need a new $2000 subscription tier for that? Hopefully it's either unlimited or a generous limit (more than 50 per week :-)).
The Curious Case of the Hype Machine.
wrong bro. still expensive. AI gets better but also more expensive. so nothing new here
Don't call it AI
Cope
Short answer? No it’s not AGI
I work in AI research and no. it is not AGI
Thanks, but I'm still holding my beer. Benchmarks are benchmarks, and we know how easy it is to screw shit up non-maliciously with JS frameworks. I need functionality to work for more than just squares. Also, is this REALLY taking your job? Realistically, no. As for the safety thing... honestly... this is mostly marketing and monopoly protection.
Waiting for the first rogue AI that runs rampant through the internet. Imagine if it took over a botnet, then used the computing power to find security holes in Android and infect every phone on the planet.
They just need to find the backdoor. No need for 0days
Blah blah blah, are they saying AI can't do abstract zero-shot pattern recognition? Because they're selling computing power instead of a quality increase or innovation; instead they push buzzwords like AGI for profit, when "open" was in their name.
Don't get me wrong, AGI is a subfield, but I tend to be on the other side, with those who believe AGI isn't a thing.
Even if we achieve AGI, they don't model the knowledge breakdown (black box).
Why is everything changing everything
How can we know that the new models really "generated" the answers for the AGI test, when it's already so "old" and they could have ingested data that has the exact answers to the benchmarks?
The ARC test is private. If it had been leaked, the cost of solving it would have been much lower.
Because the benchmark's questions are not disclosed and therefore no such data exists.
lmao at this rate we'll have AI with consciousness by end of next year
Thanks Devin.
Statistical software will not be conscious no matter how many problems it solves. Still just an LLM at the end of the day, with the same problems, if o1 is anything to go by.
@@Easternromanfan cope
Eye popping 👀 👁️🧠👁️
Still think the world will need JS devs in 2 years? lmao. It's over.
ofc, you get seniors from juniors.
@@josephvictory9536 mfs heads are in the sand. it’s over over, so incredibly over I can’t even
We won't need senior devs in 2 years
@ This. People literally can’t look at the derivative! So much cope.
There was too much funding and too much research going into AI right now to safely assume progress was just gonna stall.
So much cope in the comments
Everything is expensive at first. Terrible old TVs used to be a luxury product. The cost is going down and the quality is going up.
Please Theo, you and I both know web 3.0 is just running on hype. Give it some time, people will forget about o3 just like they forgot o1. Btw, where the fuck did the o2 model go?
Nobody cares who commented first
Speak for yourself
God has a new name.
Shheesshh
Last
Aye!
🤍
😂😂😂
Early
First 😊
no i am
First
no im first
no you are not