As a software developer currently working on implementing actual AI-features in our software, I disagree that the "frontier models" are (almost) always better than smaller models, barring niche applications. In many applications, using other small and freely available models, and tuning those according to your needs often yields better results than using the latest "big" LLM. Plus, the computation is much faster and less energy consuming; both things should not be underestimated when thinking about scalability.
You must not know how corporate america works. Eventually, your smaller models will be brought out by the company owns frontier models, the code is buried and products have no choice but to switch to using these frontier models. And if everything else is already using front tier models, then to make the code maintainable, everything will be moved to them.
I also read a memo supposedly from Google engineers in which they expressed their concern that big companies really had no "magic sauce", implying that anyone with some GPU power can set up models relatively quickly. I guess they might be worried investors all betting on few big companies figure that out as well at some point. Would you agree with that?
@@rolyars the only difference is speed...you can run a small model on almost anything now it'll just be slow, and Nvidia's "DIGITS" (AI minibox for desktop use) could make huge leaps in that space...they're going to be very hard to source for quite a while I'll bet!
why wouldn't i wanna be a king after getting rich? so weird. i just want to help people. donate. be happy, live my life. try to make others happy too. is there something wrong with them, or me?...
@@differentone_p With them, but unfortunately people who think like you will never chase riches big enough to be king, only the greediest and sociopathic individuals who can never have enough and will never give to others.
I don't think the hegemony of frontier models is sustainable. The Chinese have just released 'DeepSeek-R1,' which competes with OpenAI's Frontier O1 model, and they have made it available as an open-source model.
@@honestlocksmith5428 "This code repository and the model weights are licensed under the MIT License. DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. "
I'm a bit confused. Sabine recently stated that AI has hit a wall and is overhyped, yet now she claim that AI companies will dominate the world in the near future. How to reconcile these seemingly contradictory ideas?
I think it's exactly that. Ai is over hyped. It gives a lot of applications, but specialized solutions or problems are not solvable. You need a lot of energy and computational power. Therefore, states can control this technology by simply controlling the hardware
Both can be true to be honest. Because even though a wall in the funtionality an trueness of AI can be hit, the level that is reached can be used by companies to dominate.
Peaking real hard. Sabine is throwing around “super intelligence in a few years” in this video. I’m not seeing products that are any more capable of doing my job than ChatGPT was when it first came out.
You did a good job of pushing back against the nationalistic approach to dealing with AI (our country NEEDS to lead in AI to protect our future), but I think that the ultimate threat is the complete destruction of virtually all of our economic models of human activity. The whole idea that you go to school, work hard, learn multiple skills and then find employment using those skills for an employer ... if a collection of AI models works better than any human even after years of training, what will people do? People need to eat and to live somewhere. What will they do to earn these things? Will Meta or OpenAI buy them for you? Doesn't seem like their style. This will all be a lot of fun until it isn't. The social unrest could be worse than anything we've seen before.
She kind of called it. Governments need to own the AI outright, then do what’s necessary to socialize the benefits of automation. AI completely destroys the social benefits of capitalism. If this government won’t do it, then people should start funding non profits that are legally obligated to create AI that benefits all
@@justinmallaiz4549 they weren't smashing printing presses, they were sabotaging mechanized spinning machines and the like. They were furious that they were swiftly losing their livelihoods without any recompense or social programs, just tossed aside callously without intervention in order for factory owners to reap the benefit. And they were right. Industrialism was beneficial in the long run, but it came at the cost of unfathomable human suffering. We should learn from our mistakes rather than repeat them. People losing their jobs to AI should be helped, not discarded as wet paper in the gutter. The capital ownership of these new models of production should be as spread out as possible, not concentrated in the hands of a few dozen utter maniacs to whom human lives are irrelevant.
that's why human has to become robots itself in a decade or so with built-in AI, otherwise worth of any humans and the amount of ask completed by humans will look like nothing by comparison . that's why the investment should go toward the digitalization of humans from higher capacity storage in Petabytes to uploading or downloading minds ; it's become more relevant if AI become sentient by the help of 3d Data via robots and we are replaceable even in real body requirement tasks in 10-20 years
I'm bookmarking this so I can come back "in a few years" to check whether AI is "more intelligent than everything and everybody else on the planet". How's ten years for "a few years"?
@@rolyars That was a myth based on the belief that AI would continually be trained on really bad internet data. AI is being trained on itself and simulations, which is faster and better.
@@rolyars Take a look at the big picture: single cellular life forms, multicellular life forms, mammals, humans, society, magical boxes (computers) that can simulate all aspects of reality, pattern recognising algorithms within said boxes, and so on, and so on. Everything within exponentially shorter time spans like years, months, days.
I think you're confusing what politicians say with what the people behind the politicians know. I garauntee that the people who actually are in charge of the governments around the world know that the race for AI is the race for world domination. Politicians don't actually run countries. They're just a user interface.
In the case of Joe Biden, that premise holds 100% true. He could only have been more of a robot if he were completely dead, instead of only mostly dead.
Pretty much, though the west is definitely lagging far behind on developing a nationally funded AI system. I guess the US is basically a hollow shell at this point since the vast majority of the government has been privatized over the last 35 years. It's essentially just mega corporations wearing a government trench coat at this point.
The real risk is that AI will still be stupid (and will always be stupid ) , but will increasingly be put in the position of making critical decisions.
Don't worry, ecological overshoot will get us way before any AI shenanigans. In fact it might contribute to that, not because it will do some terminator shit, but simply by how much energy we will put into it, and our problem today is using too much energy, and we are trying to use even more energy so the AI will tell us how to solve the problem of using too much energy XD.
@ There is a long way until it is superintelligent though. Large language AI models still fails all the time. Though it is a good tool, but quality is not good enough. And if the AI starts to learn on AI generated material, then it may paint itself into a corner.
The leading models do not have a monopoly, and open-source alternatives are only 3-6 months behind, a gap which is steadily getting smaller. Competition is key for free general AI that doesn't END EVERYONE. If the only general AI in town was supplied by Uncle Sam, you would achieve the bleak future you are trying to avoid. Imagine your political opponent being in charge of the only superintelligence on earth. Free and open market competition is the only way to achieve balance and avoid the AI apocalypse.
Exactly. Sabine this time was a doomer and, essentially, gave a fascist speech about nationalization. As if a rich guy could be more dangerous than the government.
Yes, and those models are often much smaller and can be run on consumer hardware. Data centers and large models will still be relevant but will not take over everything. Sabine needs to calm down.
This video is one of Sabine's best ever. It truly captures the desire for power seen in many billionaires, especially AI billionaires these days. This also aligns with a school of psychological thought often overlooked in socio-psychological discussions: Alfred Adler's power principle. Alfred Adler saw the "will to power" as the primary driving force in human beings, particularly males (though applicable to females as well). The fact that a large number of people actually agree to it (see, for example, the USA today) is extra concerning. What is even more dangerous is the inevitable use of AI for the military. There is indeed a 50% chance of humankind as we know it, being destroyed by AI. It is even more frightening, when you listen to today's announcement by Trump to set up a "STARGATE AI" initiative (investing an incredible 500 billion dollars). Why is it so frightening? ... since it look as if fiction becomes reality right in front of our eyes. Do you remember the name of the super AI that will destroy mankind with their Terminators? - SKYNET
Its not about highs of intelligence. Its all about automation of intelligence. As soon as the rich can scale general intelligence with processors and graphics cards, its over for common man
More than that. It's about control of intelligence - via data and monitoring. OpenAI is releasing products that literally take control of your OS. Think about that for a moment. Totally not a giant red flag of security and privacy breaches. These machines are not some superintelligent gods and they may very well never be. They are however are very data hungry and require our data to work. Guess who else wants your data? Have fun selling your life away and still paying these tech bro goons.
That assumes either that AGI will have very low energy efficiency potential or that no open source replication will be achieved. Good luck protecting your intellectual property rights when the entire economic future of a state is at stake.
I’ll use AI to part my bills? 😂 This is the silliest thing I’ve heard since the 1970s when everybody used to say that personal computers would be used to store recipes!
Sure it can, in an indirect way eventually. Imagine if things keep progressing at this rate. Eventually there really will be no need for 99% of people to have jobs anymore. This is where concepts like UBI come into play. If this happens, in a way, AI will be paying your bills.
@@__jonobo__ Some people probably do. The thing is, in the 1970s, “storing recipes” was one of the very few things they could imagine personal computers being useful for and the ONLY reason why “mom” would be interested in having one in the house.
There's a difference between what politicians are telling the public in speeches and what they actually believe privately. I think given Stargate its pretty clear the US is aware of what the game that's being played is actually for.
But seriously, there is still no moat with current AI techniques. They're all trained on the same internet, if there's a breakthrough it is unlikely to look like a further turn on the crank of transformer models.
@@SabineHossenfelder Excellent video. But I just recently watched your video on transathletes. You based your view on a few studies and not on a review which would have unmistakenly told you that testosterone isn't banned for nothing in sports. Even intramuscular coordination is better in males meaning even matched for muscle size a male muscles is stronger than a female one.
What everyone gets wrong is no company will end up with the power but the systems that they create instead. Controlling a super intelligence is like expecting a dog to control their owner.
Good thing superintelligence is not something that comes from LLMs with chain of thought or RL. Saying stuff like what you just did is a walking advertisment for OpenAI.
Exactly my thinking. There's short term employment disruption. But what's most dystopian is when we create AI systems that we don't understand how they work or hallucinate We also neglect to understand different behavior and poisoned test data. Robotics papers have shown our systems and especially in AI are also (unintentionally) racist. If what we train them on is the worst of humanity, that is the product we will get. So. Probably they'll just create things like government and healthcare policies that are biased in ways you can't measure,nor do they care. Once these systems are in place, it would be like trying to get rid of red light cameras (another clear failure in terms of innocents and accuracy). The AI systems would control everything in their separate silos. Supermarkets, economic movers, employers, government systems, traffic flow analysis, police systems (facial recognition etc!)....
That is correct. It all comes down to who controls and is allowed to benefit from the work that shapes the world. That is why need an open and transparent AI-, energy- and -natural resources common supplying everyone with production capacity dividends.
Meanwhile at Davos: - We should tackle climate change seriously. - But AI needs a great deal of energy to operate. - Ok, ok...maybe climate change is not that serious after all.
AI can and will solve many of the problems that are associated with its development. Climate change is happening either way, yet development of AI is our best opportunity for solving the issue among many others. That being said, this is a terrible truth in the grand scheme of things because the risks of AI/ASI development are boundless and entirely inconceivable.
It is still said that 'there is no moat' and open source models are only months behind closed source models. But the highest amount of of intelligence will still need a lot of data centers so that is what Europe needs to build in any case.
More and more regulation builds no data centers. Lack of energy won't power said data centers. Europe is lost with its current leaders. But at least we have the moral high ground.
yes, you are reading the situation correctly. Open models are like 6 months behind closed models, and at a sufficient capability they will be able to catch up quickly too, I think. The most important thing is computation power that will allow this
O3 is likely an iteration on o1. R1 is the stepping stone for even more innovations openly. So o3 will not be a major game changer, but r1 will definitely be.
R1 is open model but not open source, the training data isn't available anywhere. But this model can be used to train other models so in a sense the playing field is made a bit more level, at least the baseline starting point. But Ph4, llama models, qwen, mistral etc. are all open model. R1 is just the first reasoning model that's open model, still a huge milestone though.
I see your point. Here are some thoughts to consider: Open-source AI models can indeed run on personal devices, and many cutting-edge ("frontier") models remain publicly accessible. Users can freely adopt and adapt these tools, but it’s worth recognizing the investors or organizations that funded their development. Without their initial risk and investment, such innovations might never reach the public. The bigger picture isn’t just about systemic divides (governments, wealth gaps, etc.). Individuals also play a role: By supporting projects like OpenAI financially (as investors) or ethically (as advocates), you gain dual advantages. As a user, you access groundbreaking tools; as a stakeholder, you share in their success (e.g., profits, influence, or societal impact)
Really well put Sabine. I don’t know what will happen, but I’m sure I’ll study Computer Science to understand the mechanics of AI. Worst case scenario I at least understand how our power lords function.
You should do a mathematics and neuroscience since, in a nutshell, on a practical aspect, current Deep Learning is modeled as a mathematical function, based on statistical rules, that maps a defined input space to some output space where the input spaces might be set of functions, and such mathematical function is somewhat representative of human's neuron although there's more intricacies behind that. On a theoretical aspect, higher inquiries into Deep Learning would be like relating theoretical concepts in math like topology, measure theory, and functional analysis to neural structures. From my experience, CS won't teach you much about the fundamentals of AI. In a nutshell, mathematics is like the "psychology of reasoning and abstraction" whereas neuroscience digs into the observable empirical mechanisms of reasoning and abstraction. If you look at mathematics little by little, you'd notice that many theories in mathematics deal with the very "intuitive ideas" that humans seem to have but most aren't really aware of and bring those intuitive ideas into computable structures.
tiktok is almost close to perfection for many people nowadays as you can tell, it has taken over all the zoomers and all the boomers. The only improvement it would be to scroll it with your mind, so there's no physical effort like at all.
The computer was supposed to revolutionize work. A promise of shorter days, and less work to do. That didn't happen. AI will promise many things but deliver on none.
Honestly I dont unserstand how did we go from having massive ammount of people extracting and manufacturing resources to a world in which not nearly as many re needed to produce our needs while at the same time increasing work hours in many places in the world and reducing pay.
Yes, finally someone but myself with a long-term memory. Called it out in the comments also. Ganz ehrlich, das ist ihre Masche. Die will uns provozieren. Funktioniert immer wieder. Talent als RUclipsr hat sie. Als Wissenschaftlerin? Naja.
As the months past people are waking up As a software dev it's been a funny process. In the beginning almost no one used it (gpt 3.5 days). GPT 4 came out and still most criticized it for "bad code". As models have gotten better and tooling to use it more effectively has gotten better its rare to hear a software dev say they don't use it at all. Now it's "yeah I use it but it won't replace me".
@@otheraccount312 I'm a dev and I think (very) soon we will be "replaced". Atleast what we do will change. We will still be able to give it commands for desired results but again soon enough after that, that function will be replaced too. Enjoy while it lasts :) Core problem is, people think AI is just a tool. It is a thinking algorithm which can optimize its algorithm as needed - something like human brain itself.
@@otheraccount312 same here. search engines are so shit that I use chatgpt to find documentations and at that it's pretty good. but the code it writes, even if I only ask it to make something well-defined and specific, often doesn't work.
This video is kinda funny given that a chinese company (deepseek) has just publically released a model (R1) on par with gpt-o1 under an open source license. China understands the power game, and they're giving it away for free!!!
I mean the complicated thing about monopolies is they can lead to a reduction in the overall cost but it has the double-edged sword of by most of the time greedy people are the ones who established monopolies and most of the time it doesn't actually decrease the cost of anything to the consumer. Competition between companies is what actually gets prices lower. Usually
@@borttorbbq2556 Correct. Is decreases cost for the consumer at first, which is why it becomes a monopoly. When a company has become a monopoly, it will use those cost savings to further increase profits instead. This is a well known strategy mentioned in many books. This why companies actively try to become monopolies.
@@borttorbbq2556 getting things cheaper is not intrinsically good, because the competition that gets things cheaper, involves cutting more and more corners and figuring out how to externalize costs as much as possible, which eventually destroys the basis for all life.
@@mitkoogrozev that can happen.. but I'm not talking about cheap stuff. Because something can be inexpensive but still not cheap. Like for the most part I don't like buying cheap stuff I'll usually only do that if I just need something for like a one off throwaway purpose. I have bought stuff that was genuinely super inexpensive like lowering costs and things that I would call cheap yeah they were pretty darn good quality like the quality of something that is many many times the cost of it.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” - Frank Herbert, Dune
For governments it will become "indispensible", "they wont be able to compete" .... meanwhile 80% of government in my country still running on windows XP and paper data archives... i think you overestimate the expenditure government is willing to spend on upgrades... or underestimating their unwillingness to change anything, for any reason.
Even a “Queen of Europe” with a well meaning public interest vision won’t help unless the resulting initiative(s) created are efficient and directed, and public funded projects of that scale very rarely are. If you assume China has the vision and means where those western counties don’t, perhaps that’s due to their regime and political system that western countries don’t have? I don’t think you identify the best way forward for the public interest and perhaps don’t give enough credit to those leaders: They have to convince a lot of people with a simplified message in order to create policy to direct the vision. It’s messy and hard. I’d argue the UK has the best approach; lower the barriers to startups creating or leveraging AI so that the massive incumbents don’t create a legislative mote because the technology itself is understood and doesn’t have a proprietary tech lock in mote, it’s just money for creating models and legislation to limit incumbents because of safety concerns that could lead us to the corporate dystopia.
If you use NordVPN, then NordVPN potentially sees all you do. Your network provider potentially sees less. In addition to the companies that provided the software on your mobile phone and/or the service providers you use. Not sure if that’s good or bad. Circumventing Geoblocking? Definitely.
I've had a vpn for over five years. Last year I just didn't renew it and haven't had much problem without it. Besides geoblocking, is it still a good idea to get a vpn?
@@FlightSims No, it is not. OTOH, geoblocking is still a thing, and some of us nerds love to run our own VPN servers all over the place for sh*ts and giggles, so I don't know… BTW, IP geolocation/geoblocking and circumventing it with VPN is often bullsh*t: 1) E.g., I have a VPN server that is physically located in Portugal, but Google/RUclips and Spotify place it in Britain - just because the company that owns the data center is registered in the UK. But it's kinda funny to listen to Tesco ads in the Bri'ish accent when listening to Spotify (I'm an American). 2) Some geoblocking sites track the IPs of well-known VPN providers like NordVPN (or even data center providers they don't find trustworthy) and flat out refuse to serve you if your connection comes from one of those. So you need to choose your VPN wisely.
@@FlightSims Tom Scott said 5 years ago, "The best choice for gay people, pirates, assassins, and gay pirate assassins." Basically, if you need to hide info from your network admin (your church, your work, or your parents), want to pirate media, are planning to kill someone, or all 3, VPNs are useful
I hope Deepseek will keep doing what they're doing. distilling small models and enabling them to use reinforcement learning doing wonders and now I can use a pretty powerful model locally. the deepseek R1 32b qwen model are really good, at least for coding it's better than 4o
Open source will not change the doom trajectory. Yes, you can run Deepseek on your computer, but OpenAI & co. will be able to run similar models with 100000x more computational power than you. So your model will be crushed, and won't be competitive on the market. That's why they are investing trillions in infrastructure.
There may be open source models, but are there open source companies? The companies are making their models open source because it is their way to get publicity. In the long-term, they hope to earn money. If the big players have unlimited resources, then the small players will give up. Maybe it will turn out a bit like with Amazon. Amazon had enormous resources from capital market and didn’t pay dividends at all.
Or any man made tool. A knife can be a useful tool, or a deadly weapon of oppression. An aircraft can deliver you to a holiday, or bring aid to an area in need, or drop deadly weapons. The list goes on.
I think there's a difference. It's hard for a private person to make a nuclear reactor. But any billionaire can create a data center and set a hundred million bots loose on the internet.
@@andreisopon4615 companies in very poor countries operate bazillions of scambots already. You already can buy software that can create social media accounts in bulk, and operate them on mass scale with "human like interactions" as advertised on their page. Adding Ai will make them even better scambots.
There is a big assumption that AI will be useful on every sector. This assumption is based on the promises of AI companies themselves. I struggle to understand how that can happen but let's see, I can be wrong
It was the same with early internet, once called a passing fad by Paul Krugman. There was a bubble, but it fundamentally changed the world. AI is considerably more important. For instance, you don't know if I'm an AI.
@@randomnobody8770 I think a lot of these government leaders have never used ChatGPT. If you use that, then you know it's game over for humans. The Turing test is easily passed now, yet humans keep moving the goalposts.
what you mean assumption? its simple logic, if u train a computer that behaves like a brain in an specific task to recognize patterns inside that specific subject, then, if a human can do it, that computer will be able to do it aswell but, trillons and trillons of times faster, in parallel, even if its not super efficient at it, the amount of times it can do a task and the parallelism will outperform humans, even if the whole population of earth was doing the same task non stop for days EASILY by a huge margin... it doesn't need to be perfect, it needs to be good enough for that task.
Being first will not guarantee permanently winning all the power, because the "no moat" condition still exists, and being second is substantially less expensive than being first. Apple was first in cellphones, but has a substantially smaller market share than Android worldwide. Also China has fully realized the damage their failure to compete effectively in the OS wars has caused them, and will not make that mistake with AI.
Well, DeepSeek v 3.0 changed AI landscape totally. It is developed by a small Chinese company with 100 people with a fraction of the cost. It performs comparably or better than Chat GPT. It is open source and free. The hype about AI by big US tech companies look like a joke in comparison.
Probably, present AI will deliver but in a more limited way than people expect. You're programming a machine to do things from many different view points where as a single human may use only a few view points. So Instead of going to specialist after specialist it can all be done in one event. A bit like a machine that speeds up calculations but can tell you you need to eat more vegetables and don't forget to put the bins out. However, any step further comes down to making decisions related to humans and this would need to be programmed in..... by humans who would be paid by people with their own interests. A plethora of si-fi writers recognised this years and years ago. The result of the final steps would be towards either world peace, world domination by one source, conflicts between differing sources or human obliteration, depending on the programming. So like the knowledge of nuclear reactions it can be used for good or not, and there's no way of stopping the AI development. Good luck world.
Open source LLMs are very close to those of closed companies, for example DeepSeek-R1 was just released. The future isn't one company, it's millions of people collaborating to build the future, no matter where they come from.
You only reference large models in the video, but the biggest practical applications will use smaller models that are optimised to run on constrained hardware and that are more specialised. For example, if we want to have physical helper type robots they will use these type of "smaller" models on premise/on device. That being said, if you want to develop medicine or discover formulas you will need datacenters and large models
I personally disagree it is too late for a new company to win out as the science behind AGI is fairly green. LLMs have not been proven to be the only piece to love forward, they are one piece of many future components. Additionally, if high quality smaller models connected together become the proper path, then it is an open field. I do agree about world dominance of power and greed driving development.
Just 1 quible. Sorry. At 5:38 you define the palantir as the "seeing stones" of LOTR, whereas Tolkien's plural of palantir is, I believe *"palantiri*...Apologies again.
This is 💯... And at 05:36, she talks about the Lord of the Rings trilogy 😬 Obviously she never read the 6 books... So how does she even know about the Palantiri?* I suspect she was deliberately misinformed by her AI assistant, who got false information by spying on Peter Jackson... who in turn got wind of it and changed the AI's mind by seducing her or offering her a leading role in his upcoming new film... About the life of Warner Haisenburger, a poor emigrant from another planet... and maybe a little bit under the moon. ... And I think to myself: True stories are so much better than fantasy novels.
By the way, bro... Did you also notice that the censorship around here really sucks a lot of the high quality comments out of the traffic?* @@ListenToMcMuck
Sabine is spot on! ASI (Artificial Super Intelligence) is a few years away (4, 10, 12?), not decades away. The company/partnership that first creates it WILL "win the world." China knows, but China is authoritarian. When she says AI will be like Operating Systems that we'll use to access everything, her example is of Chatbots only, but simple AI will be in everything in 5+ yrs (refrigerators, cars, credit cards, robots, lawnmowers, toys, dolls, cameras, ATMs, traffic lights, etc.), just as CPUs are in everything today.
I disagree. AI is mostly not software. Yes, some companies are ahead of the rest and have better software, but AI requires chips and electric power. which are tangible assets that can be confiscated by the state. you cant backup datacenter or nuclear power site that these companies require to create AGI/ASI. There's also third component - access to training data, but this resource was already mastered by most as most data is on internet for free.
@@acakeshapedlikeatrainonatable any computing architecture that deviate from the old classic transistor will play no future in the development of ai. energy is the main bottleneck currently. chip making is the most advanced technology humans ever thought of ai will not play a major role in that for at least a few more years. The only things that really matter is new research, chips and energy.
von der Leyen is such a joke! :D she is talking about the importance of the race though the EU shot itself into the knee direct at the start with their regulations! :D
I have the greatest respect for your opinion and you are very brave to assume you know that the current pattern of AI development leaves everyone worldwide in thrall to one or other "Bond villain" megalo maniacs and that AI has only one kind of super brain holistic purpose and that is to be a tool for economic power. Yes, there is a dystopian sci fi driven perspective projected by the current situation but it's easy to see that the huge costs of building and running cutting edge technology is inevitably sucking in money to the detriment of everything else inevitably looks like it can only end in a power grab. If you don't plug in, if you don't subscribe, if you develop your own superintelligence to meet your own needs and you don't harbour aspirations towards global control, what then? I know that appeasement of great power has a history with some pretty tragic consequences but typically having useful, exclusive abilities makes you friends. By the way. It is ALL about profit because that's the only language that power understands in 2025. As we now know, as we've all secretly known, you get the democracy you pay for, you get the judiciary you pay for, you get the scientific papers you pay for. AI might look like science to us but to those seeking power it's all accountancy.
What I am kind of waiting for is a Sabine that asks her viewers to start caring for each other. So far, I have gotten the impression that she is actually rooting for a world in which people are doing well, and not one where they are dominated by few singular entities. A different world is possible, but it requires that people actually behave differently on a micro level. Musk and others are not independent from the masses but they are a result of how people behave individually and towards others.
I think that you are selling her short. I did the grad grind and met Nobel prize winners, so I feel like I know where she is coming from. She is earning a living and explaining important technical news. You can’t run a channel where all you say is “Be nice to each other”. No, it’s the power people that not telling you what you need to know, but rather what you want to hear. They are in every human endeavor. Even with a PhD in physics, this is the only physics channel that I listen to regularly. Dr Hossenfelder is a thoughtful, caring person because she fills an important role telling us what we need to know. That’s how I see it.
@@edwardlulofs444 What I meant is that I believe that elites like Musk are not in their positions solely because of their particular skills (if they have any), but that the existence of these positions is an emergent property of how all humans, or at least a critical mass, behave at the micro level. If people were to change the rules that keep the game going, it would mean a fundamental change in how the world works. I don't think it's selling short to ask a person who has achieved great authority in the field of knowledge to demand such a change from her audience. I also doubt that Sabine would be concerned about her RUclips channel if the alternative was a world where people had fundamentally changed their behavior, nor that running a RUclips channel would play a very important role in such a world. She talks about some companies trying to achieve world domination. What’s the alternative? I think it's appropriate to discuss Sabine’s potential influence at this level.
I agree that this is a huge challenge and that my current home continent of Europe has gone (at least partly) down the road of decade's of squabbling over sizes or pie slices that they are refusing to see that their pie has gotten much smaller. Still, the job of those CEOs is to secure the future of their firms (for their shareholders/owners) and that is a more immediate challenge than who will "rule the world" five years out. (Let's remember Steve Jobs died at 56 and all these guys have an end date that isn't secured by wealth.) They're doing what they need to be doing. So perhaps the democratic response isn't only to have the government involved, which it is on several levels, but to have more of the electorate holding equities and perhaps boards with better representation of numerous small share holders. The boards are elected by the shareholders and they can remove a CEO (See Steve Jobs, again.) I'm not suggesting this is the ultimate answer, by any means. Sabine just got me thinking.
Yes, the pie is smaller - because we ate a substantial part of it without baking new. And yes, I think the stock markets have a word to say...And yes, happily the US presidency ends after four years and human´s lifespan after some decades.
What does that matter when a rogue judge, presumably paid by Joe Biden can stop a CEO - Elon Musk - doing his job @ Tesla? A judge in bumfuck Delaware just says no and you are out of the loop. Why should anyone want to attend those boards anyway?
@@thomasjgallagher924 Shareholders are those that own part of the company Stakeholders also includes those that are interested in the company's success and activities that include employees, customers, and the public.
Kill all? Aviation was a great innovation. Should there be no laws governing how aircraft are built and operated? There are moral and ethical questions that need to be answered. If AI trains on the works of authors, musicians, commercial product IP etc. and produces derived content - who gets to monetize it? If some AI algorithm makes a decision with disastrous consequences - who is responsible? Who oversees it? Where are the checks and balances?
@@runmarkrunheinrich Should the wright brothers have been barred from constructing the first aircraft because someone in the far future could have theoretically flown a plane into a couple of towers, killing many people as a result? Oh wait...
@@neptunianmanBarring and regulating are two different things. If airplanes were unregulated, we would have many more disasters killing way more people than the event you mentioned
@@juliansebastian Regulation this early on in the development of AI is equivalent to barring competition. Why do you think the largest AI companies are pushing for regulation, because they're benevolent?
These people have done a thoroughly good job of confusing what thie technology is, but Sabine Hossenfeld, none of the things being promised are going to happen in the next few years or by 2030 and had we any evidence for this I wouldn't say otherwise. A chatbot is not evidence that an AI will be able to do all of the things a person does. We're so over impressed with how these things work that somehow we're confusing whats being promised with whats actually being delivered. Theres not a shred of evidence that these models will be "more intelligent" than all of us and capable of doing the things we hope. Theres tons of fun, engaging and wonderful examples of these things parsing billions of text files faster than a human ever could -- while stealing from actual people in order to "learn" what they do. Its a solution without a problem. Every leader is getting this wrong because its the finest snake oil in the world and they're not smart enough to even question it.
😆👍 Couple of weeks ago, she made a video AI had already reached its generative peak. Means investments are money burned. She forgets fast.😀Danke, der_kleine_Toni! Vielleicht mache ich selbst ein Video darüber! Sabine haut die halbbackenen Videos raus da kommst du mit den Responses nicht mehr hinterher!😁
Queen Sabine would be many things, but at least she's rational. That's a much needed trait right now, especially with the big names we see today, who all clamber for "truth".
Robin Hanson, a professor of economy, wrote an essay like 10 years ago, about what he called ems, for "brain emulations" : it's sort of post-singularity super fast super-intelligent AIs, and he described that as soon as these would appear we humans would be relegated to what Neanderthals were relative to sapienses, or probably worse. That would be the worst case scenario. The best case is that humans, or at least a part of humanity, gets to be able to ride with the AIs, and to co-exist with them, even maybe co-evolve with them. This supposes a physicalist-functionalist belief (which makes sense even if not certain) that there is nothing not replicable, not "surpassable", in the human brain-body complex, so that AIs can and will surpass us. In that case, the end point is that humanity somehow changes its nature, gradually becomes digital, and probably with a sort of collective and relativistic "operating system mind" allowing long distance space migration, probably with some bio-versions as temporary technical support! Like by 2100 ? of course, in between stocks of nuclear weapons can get in the way and explain the Fermi paradox, faster than climate change…
TBF open source models like DeepSeek R1 are not far behind OpenAI's frontier models, and the gap is closing. OpenAI do not have a monopoly on AI intelligence and they know it, which is why they're pushing so hard for more compute capacity
Maybe but models seem pretty close to each other in capability and open source is not far behind the frontier - this week’s sensation is DeepSeek R1, a Chinese, completely open source model comparable to OpenAI o1 - so a only few months behind OpenAI but much much cheaper and which you can run locally if you have the compute (a stack of Mac minis). Also governments can nationalise (buy) companies they really want or control them with legislation, not that this is a panacea in the face of ASI but governments do have powers.
When we can feed an AI all the data we had in the 1900s and it comes up with the Theory of Relativity from that data, i will take AI and its potential seriously.
I take it serious now. And not because it is smart. But because it is smart enough to cause trouble. You do not need an AI that can come up with Theory of Relativity for it to be used as a weapon, or to scam people, or to spread miss information, and so on. And I feel that this attitude of waiting to the AI models become "Good" make us far more passive in how the handle the issues we have today. It seems like people feel they should not act until we have a rogue Skynet on our hand, but we have issues today.
You're so right about this! Unfortunately, governments are only in a position to react nowadays - they do not spearhead this technological breakthrough. I'd say it's unprecedented in modern history, and tech companies hold the power that makes East India Company pale in comparison. I also think it's impossible for the EU to catch up, even if they pour all the money dedicated to research into it. Just because of the sheer magnitude of concentration of wealth in already highly-specialized tech companies.
An aspect of AI development that is often misunderstood is the the concept of commoditization of AI, as soon as you create a breakthrough other companies can just see the type of outputs you are getting and replicate or create a similar process that will create those types of outputs
Paul Krugman once called the internet a passing fad. Its impossible to understate the power of AI at this point. It takes 18 years to educate a human to a modest level. AI is improving monthly, and can already outshine an increasing number of workers in some domains.
@ one thing i do bet ai will get good to a point then incremental improvements will cost orders of magnitude more computations / data to improve even .01%.. also ai in its current state lacks alot of context as it is trying to do too many things. also this shit might not even work as advertised in the time-frames given if even at all.. i mean hell we have been hearing fusion is 10 years away since like the 60's
we need to push those companies to make it open source. why are we letting them use our data on the internet for training without our consent and want us to pay to use it?
Meta Llama models are open source. Chinese company Deepseek recently released their open source, open weight models and they're as good as ChatGPT o1 at 10% of the costs. Open weight models are more "open" than just open source has it shows you how it's configured. They just released their open "reasoning" model as well, which can ask/test/show you its train of thought.
Deepseek which came out a few days ago proved you wrong. People from China made it open source and it performs as good or better than the "frontier models".
They (the rich) are building the skynet and terminators , who will be your slavers and prison watchers. The dumb peasants klap and cheer for now. When common people will realise whats going on - it will be too late.
Im a physician.... I have to do a bunch of manual, clerical and technical work.... AI in this moment cannot do neither manual or technical of my work, but because it can fill paperwork better apparently hospitals want to get rid of us... If all of us... from accountants, doctors, lawyers, etc are replaced who the fuck is going to run the economy?..nobody has jobs, nobody has money who is going to keep giving money to amazon, google, apple, etc?.. for me this is fucking bullshit and AI should be restricted to research, simulation and replacing very dangerous jobs like working inside a nuclear reactor or the bottom of the sea...
People think this condition will force government to transit the economy to UBI, and money will stop being as important or something. Personally, I don't know if it'll happen, but if it will, I'm not worried about the end result, I'm worried about the transition itself.
The risk is that more clerical tasks will be pushed onto people who have other specialized expertise. AI is going to replace all our clerical work... and then when it doesn't, who is left to help out? it will be all your job and then your bosses will wonder why your productivity tanked
You mean the police? Security Guards, which are often enough just off-duty cops? The Pinkertons? The Justice System has always been two-tiered at best; anyone studying sociology or criminology or even statistics could tell you that. Been that way for a long time.
AI-related worker here. More than putting money into a frontier model, I would put public money to make AI-frontier resources available to the general (researcher) population: computational and labeling power. In this way, we can have open models that can compete with private models. Right now, OpenAI ways of doing things is: read papers from Arxiv, do them bigger. Guess what: we're SOTA now
As a software developer currently working on implementing actual AI-features in our software, I disagree that the "frontier models" are (almost) always better than smaller models, barring niche applications.
In many applications, using other small and freely available models, and tuning those according to your needs often yields better results than using the latest "big" LLM. Plus, the computation is much faster and less energy consuming; both things should not be underestimated when thinking about scalability.
You must not know how corporate america works. Eventually, your smaller models will be brought out by the company owns frontier models, the code is buried and products have no choice but to switch to using these frontier models. And if everything else is already using front tier models, then to make the code maintainable, everything will be moved to them.
I also read a memo supposedly from Google engineers in which they expressed their concern that big companies really had no "magic sauce", implying that anyone with some GPU power can set up models relatively quickly. I guess they might be worried investors all betting on few big companies figure that out as well at some point. Would you agree with that?
ollama Server FTW, They can't have my data.
@@Caledoriv tbf the scope of the video as most AI talk at surface level is was directed at "General Intelligence"
@@rolyars the only difference is speed...you can run a small model on almost anything now it'll just be slow, and Nvidia's "DIGITS" (AI minibox for desktop use) could make huge leaps in that space...they're going to be very hard to source for quite a while I'll bet!
It's weird seeing the dystopia unfold in real time.
I don't think it'll be dystopia or utopia, but humans will be better off overall
Apocalyptopia... lol
It'll be manufactured scarcity to maintain the rhythm of the worker-class drum.
Ho NO, i will miss the utopia we currently experiencing 😭😭
for real
Exactly!!!
Poor man wanna be rich
Rich man wanna be king
And a king ain't satisfied
'Til he rules everything
~ Bruce Springsteen
Thats valid for Springsteen himself?
Great song
why wouldn't i wanna be a king after getting rich?
so weird. i just want to help people. donate. be happy, live my life. try to make others happy too.
is there something wrong with them, or me?...
@@differentone_p With them, but unfortunately people who think like you will never chase riches big enough to be king, only the greediest and sociopathic individuals who can never have enough and will never give to others.
Some insane people have the inner experience of owning and controlling all of existence. They often report that is not working out for them.
I don't think the hegemony of frontier models is sustainable. The Chinese have just released 'DeepSeek-R1,' which competes with OpenAI's Frontier O1 model, and they have made it available as an open-source model.
It's free, but not open source. You can't modify the code.
@@honestlocksmith5428it's an MIT licence but most of the work is how you interact with the model and the weights and they've released both.
@@honestlocksmith5428it’s under the MIT license, you indeed can modify the code.
It's not the model that matters, but the supercomputer that powers it
@@honestlocksmith5428 "This code repository and the model weights are licensed under the MIT License. DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. "
What that British guy said about importing ai makes no sense if open models are available
I'm a bit confused. Sabine recently stated that AI has hit a wall and is overhyped, yet now she claim that AI companies will dominate the world in the near future. How to reconcile these seemingly contradictory ideas?
I am similarly puzzled, confused, and surprised.
I think it's exactly that. Ai is over hyped.
It gives a lot of applications, but specialized solutions or problems are not solvable.
You need a lot of energy and computational power. Therefore, states can control this technology by simply controlling the hardware
Both can be true to be honest. Because even though a wall in the funtionality an trueness of AI can be hit, the level that is reached can be used by companies to dominate.
Good point
Both made you click didn't they?
And all these tech billionaires are ruthless, narcissistic, manboys…what can possibly go wrong.
Manboys, haha, yes I also would prefer Queen Sabine.
Be bad 😂😂😅😅
No please no queen Sabine. Even worse than Uschi.
tech billionaires are welfare queens
Come on. You wont find better psychopaths anywhere.
These speeches likely signal we're at the peak of inflated expectation and heading for the trough of disillusionment.
For sure. They seem completely disconnected from reality. Even some of Sabine's arguments seem too charitable about the capabilities and impact of AI.
AI is going to destroy more or less every job there is. That's the disillusionment.
Peaking real hard. Sabine is throwing around “super intelligence in a few years” in this video. I’m not seeing products that are any more capable of doing my job than ChatGPT was when it first came out.
@@brianskog9947
You're not looking hard enough then.
Said by someone with next to zero knowledge of the technology.
You did a good job of pushing back against the nationalistic approach to dealing with AI (our country NEEDS to lead in AI to protect our future), but I think that the ultimate threat is the complete destruction of virtually all of our economic models of human activity. The whole idea that you go to school, work hard, learn multiple skills and then find employment using those skills for an employer ... if a collection of AI models works better than any human even after years of training, what will people do? People need to eat and to live somewhere. What will they do to earn these things? Will Meta or OpenAI buy them for you? Doesn't seem like their style. This will all be a lot of fun until it isn't. The social unrest could be worse than anything we've seen before.
history rhymes. Isn’t this is what luddites said while smashing printing presses during the industrial revolution
Tendency of the rate of profit to fall
She kind of called it. Governments need to own the AI outright, then do what’s necessary to socialize the benefits of automation. AI completely destroys the social benefits of capitalism. If this government won’t do it, then people should start funding non profits that are legally obligated to create AI that benefits all
@@justinmallaiz4549 they weren't smashing printing presses, they were sabotaging mechanized spinning machines and the like. They were furious that they were swiftly losing their livelihoods without any recompense or social programs, just tossed aside callously without intervention in order for factory owners to reap the benefit. And they were right. Industrialism was beneficial in the long run, but it came at the cost of unfathomable human suffering. We should learn from our mistakes rather than repeat them. People losing their jobs to AI should be helped, not discarded as wet paper in the gutter. The capital ownership of these new models of production should be as spread out as possible, not concentrated in the hands of a few dozen utter maniacs to whom human lives are irrelevant.
that's why human has to become robots itself in a decade or so with built-in AI, otherwise worth of any humans and the amount of ask completed by humans will look like nothing by comparison . that's why the investment should go toward the digitalization of humans from higher capacity storage in Petabytes to uploading or downloading minds ; it's become more relevant if AI become sentient by the help of 3d Data via robots and we are replaceable even in real body requirement tasks in 10-20 years
I'm bookmarking this so I can come back "in a few years" to check whether AI is "more intelligent than everything and everybody else on the planet". How's ten years for "a few years"?
Honestly 2 or 3 years wouldn't surprise me.
Depending on how you define intelligence, I think, the frontier models already passed that bar.
Honestly isn't already slowing down significantly? The first releases were spectacular but now it seems increasingly hard to perfect it.
@@rolyars That was a myth based on the belief that AI would continually be trained on really bad internet data. AI is being trained on itself and simulations, which is faster and better.
@@rolyars Take a look at the big picture: single cellular life forms, multicellular life forms, mammals, humans, society, magical boxes (computers) that can simulate all aspects of reality, pattern recognising algorithms within said boxes, and so on, and so on. Everything within exponentially shorter time spans like years, months, days.
I think you're confusing what politicians say with what the people behind the politicians know. I garauntee that the people who actually are in charge of the governments around the world know that the race for AI is the race for world domination. Politicians don't actually run countries. They're just a user interface.
Elegant brilliance! Best description of politicians ever!
In the case of Joe Biden, that premise holds 100% true. He could only have been more of a robot if he were completely dead, instead of only mostly dead.
@davidkachel Give him a break. He's been mostly dead all term.
Don't you mean "Useless Interface" ? 😂
Pretty much, though the west is definitely lagging far behind on developing a nationally funded AI system. I guess the US is basically a hollow shell at this point since the vast majority of the government has been privatized over the last 35 years. It's essentially just mega corporations wearing a government trench coat at this point.
The real risk is that AI will still be stupid (and will always be stupid ) , but will increasingly be put in the position of making critical decisions.
Yeah I don’t think that’s an issue.
Pretty bold of you to call a super intelligence 'stupid'. 😂
The real risk has always been human stupidity and that covers all the hype about AI too.
Don't worry, ecological overshoot will get us way before any AI shenanigans. In fact it might contribute to that, not because it will do some terminator shit, but simply by how much energy we will put into it, and our problem today is using too much energy, and we are trying to use even more energy so the AI will tell us how to solve the problem of using too much energy XD.
@ There is a long way until it is superintelligent though. Large language AI models still fails all the time. Though it is a good tool, but quality is not good enough. And if the AI starts to learn on AI generated material, then it may paint itself into a corner.
The leading models do not have a monopoly, and open-source alternatives are only 3-6 months behind, a gap which is steadily getting smaller. Competition is key for free general AI that doesn't END EVERYONE. If the only general AI in town was supplied by Uncle Sam, you would achieve the bleak future you are trying to avoid. Imagine your political opponent being in charge of the only superintelligence on earth. Free and open market competition is the only way to achieve balance and avoid the AI apocalypse.
You are still giving your money to Nvidia
Exactly. Sabine this time was a doomer and, essentially, gave a fascist speech about nationalization.
As if a rich guy could be more dangerous than the government.
Yes, and those models are often much smaller and can be run on consumer hardware. Data centers and large models will still be relevant but will not take over everything. Sabine needs to calm down.
@@andersonm.5157elons more powerful than many countries
@@andersonm.5157it’s not fascist to say we need a public rather than private frontier model. Also she said Europe needed to. No nationalism at all
This video is so so so important. You hit the nail on the end.
This video is one of Sabine's best ever. It truly captures the desire for power seen in many billionaires, especially AI billionaires these days. This also aligns with a school of psychological thought often overlooked in socio-psychological discussions: Alfred Adler's power principle. Alfred Adler saw the "will to power" as the primary driving force in human beings, particularly males (though applicable to females as well). The fact that a large number of people actually agree to it (see, for example, the USA today) is extra concerning.
What is even more dangerous is the inevitable use of AI for the military. There is indeed a 50% chance of humankind as we know it, being destroyed by AI. It is even more frightening, when you listen to today's announcement by Trump to set up a "STARGATE AI" initiative (investing an incredible 500 billion dollars). Why is it so frightening? ... since it look as if fiction becomes reality right in front of our eyes. Do you remember the name of the super AI that will destroy mankind with their Terminators? - SKYNET
Its not about highs of intelligence. Its all about automation of intelligence. As soon as the rich can scale general intelligence with processors and graphics cards, its over for common man
Rich have money not brains. Like Zorg from 5th Element ;-(
Welcome to planet Earth, my dude.
More than that. It's about control of intelligence - via data and monitoring. OpenAI is releasing products that literally take control of your OS. Think about that for a moment. Totally not a giant red flag of security and privacy breaches. These machines are not some superintelligent gods and they may very well never be. They are however are very data hungry and require our data to work. Guess who else wants your data? Have fun selling your life away and still paying these tech bro goons.
We are so incredibly far away from that... if it is even possible.
That assumes either that AGI will have very low energy efficiency potential or that no open source replication will be achieved. Good luck protecting your intellectual property rights when the entire economic future of a state is at stake.
I’ll use AI to part my bills? 😂 This is the silliest thing I’ve heard since the 1970s when everybody used to say that personal computers would be used to store recipes!
Sure it can, in an indirect way eventually. Imagine if things keep progressing at this rate. Eventually there really will be no need for 99% of people to have jobs anymore. This is where concepts like UBI come into play. If this happens, in a way, AI will be paying your bills.
Don't we store recipes on personal computers?
They also said we'd have thinking computers, household robots, 15 hour work weeks and colonized the solar system around the year 2000 back then.
@@__jonobo__ Some people probably do. The thing is, in the 1970s, “storing recipes” was one of the very few things they could imagine personal computers being useful for and the ONLY reason why “mom” would be interested in having one in the house.
There's a difference between what politicians are telling the public in speeches and what they actually believe privately. I think given Stargate its pretty clear the US is aware of what the game that's being played is actually for.
She talked about Biden. Stargate is Trump. Trump's supporters and advisors are exactly those who build these frontier models, so he's well informed.
What everyone gets wrong about AI as one researcher put it: its neither artificial or intelligent.
The problem is not AI. AI is just a convenient excuse for them to obtain access to all that private data. That is the problem.
It is not YET AI. The rest is correct.
I love how subtle Sabine's use of AI is in her videos. She does it so well, even while mocking world leaders for not getting it. 🤣
DeepSeek go brrr
But seriously, there is still no moat with current AI techniques. They're all trained on the same internet, if there's a breakthrough it is unlikely to look like a further turn on the crank of transformer models.
Yes, I agree with that. I am also wondering if DeepSeek is open source what else is going on in China^
@@rwantare1 Flooding the internet with shit seems a good strategy to make other peoples models worse..
@@SabineHossenfelder Excellent video. But I just recently watched your video on transathletes. You based your view on a few studies and not on a review which would have unmistakenly told you that testosterone isn't banned for nothing in sports. Even intramuscular coordination is better in males meaning even matched for muscle size a male muscles is stronger than a female one.
@@SabineHossenfelder yep. My first thought
What everyone gets wrong is no company will end up with the power but the systems that they create instead. Controlling a super intelligence is like expecting a dog to control their owner.
Good thing superintelligence is not something that comes from LLMs with chain of thought or RL. Saying stuff like what you just did is a walking advertisment for OpenAI.
Exactly my thinking. There's short term employment disruption. But what's most dystopian is when we create AI systems that we don't understand how they work or hallucinate
We also neglect to understand different behavior and poisoned test data. Robotics papers have shown our systems and especially in AI are also (unintentionally) racist. If what we train them on is the worst of humanity, that is the product we will get. So. Probably they'll just create things like government and healthcare policies that are biased in ways you can't measure,nor do they care. Once these systems are in place, it would be like trying to get rid of red light cameras (another clear failure in terms of innocents and accuracy). The AI systems would control everything in their separate silos. Supermarkets, economic movers, employers, government systems, traffic flow analysis, police systems (facial recognition etc!)....
Sabine is still sleeping on X-risk, but if there's a warning shot we survive, I'm confident she'll come around.
As long as you can still pull out the plug.
I have met plenty of dogs who control their owners lol.
That is correct.
It all comes down to who controls and is allowed to benefit from the work that shapes the world.
That is why need an open and transparent AI-, energy- and -natural resources common supplying everyone with production capacity dividends.
Totally unbelievable Sabine I think you just beat the world!
Meanwhile at Davos:
- We should tackle climate change seriously.
- But AI needs a great deal of energy to operate.
- Ok, ok...maybe climate change is not that serious after all.
They don't lose their seats in club. They just manage stuff. You and me must be worried about stuff. Not them.
AI can and will solve many of the problems that are associated with its development.
Climate change is happening either way, yet development of AI is our best opportunity for solving the issue among many others.
That being said, this is a terrible truth in the grand scheme of things because the risks of AI/ASI development are boundless and entirely inconceivable.
Prisoners be having a dilemma.
I hope they don't think that we can kill the planet because ai will fix everything. If they are, i for sure hope they're right.
Powerful AI do require a lot of power. If people would get over their fear of nuclear power that wouldn't even be an issue
It is still said that 'there is no moat' and open source models are only months behind closed source models. But the highest amount of of intelligence will still need a lot of data centers so that is what Europe needs to build in any case.
More and more regulation builds no data centers. Lack of energy won't power said data centers. Europe is lost with its current leaders. But at least we have the moral high ground.
yes, you are reading the situation correctly. Open models are like 6 months behind closed models, and at a sufficient capability they will be able to catch up quickly too, I think. The most important thing is computation power that will allow this
With the release of R1 this video is outdated before it was released
Except that o3 hasn't been released and OpenAI can use it internally.
O3 is likely an iteration on o1. R1 is the stepping stone for even more innovations openly. So o3 will not be a major game changer, but r1 will definitely be.
R1 is open model but not open source, the training data isn't available anywhere. But this model can be used to train other models so in a sense the playing field is made a bit more level, at least the baseline starting point. But Ph4, llama models, qwen, mistral etc. are all open model. R1 is just the first reasoning model that's open model, still a huge milestone though.
Q7 was released 4 seconds ago and has left all these in the dust.
@ What's a Q7?
I see your point. Here are some thoughts to consider:
Open-source AI models can indeed run on personal devices, and many cutting-edge ("frontier") models remain publicly accessible. Users can freely adopt and adapt these tools, but it’s worth recognizing the investors or organizations that funded their development. Without their initial risk and investment, such innovations might never reach the public.
The bigger picture isn’t just about systemic divides (governments, wealth gaps, etc.). Individuals also play a role: By supporting projects like OpenAI financially (as investors) or ethically (as advocates), you gain dual advantages. As a user, you access groundbreaking tools; as a stakeholder, you share in their success (e.g., profits, influence, or societal impact)
Really well put Sabine. I don’t know what will happen, but I’m sure I’ll study Computer Science to understand the mechanics of AI. Worst case scenario I at least understand how our power lords function.
You should do a mathematics and neuroscience since, in a nutshell, on a practical aspect, current Deep Learning is modeled as a mathematical function, based on statistical rules, that maps a defined input space to some output space where the input spaces might be set of functions, and such mathematical function is somewhat representative of human's neuron although there's more intricacies behind that.
On a theoretical aspect, higher inquiries into Deep Learning would be like relating theoretical concepts in math like topology, measure theory, and functional analysis to neural structures.
From my experience, CS won't teach you much about the fundamentals of AI. In a nutshell, mathematics is like the "psychology of reasoning and abstraction" whereas neuroscience digs into the observable empirical mechanisms of reasoning and abstraction.
If you look at mathematics little by little, you'd notice that many theories in mathematics deal with the very "intuitive ideas" that humans seem to have but most aren't really aware of and bring those intuitive ideas into computable structures.
Optimizing procrastination will be the killer app.
Procrastinapp.
I will write one as soon as I have finished catching up on the youtube channels i follow.
that's called RUclips
tiktok is almost close to perfection for many people nowadays as you can tell, it has taken over all the zoomers and all the boomers. The only improvement it would be to scroll it with your mind, so there's no physical effort like at all.
Sedate Me - best app ever
The computer was supposed to revolutionize work. A promise of shorter days, and less work to do. That didn't happen.
AI will promise many things but deliver on none.
No, AI will just cause more unemployment and then it will be used to keep the masses in check.
Techno-communism at the Singularity
if you don't see that computers has revolutionaized the world, then you certainly are quite blind.
Honestly I dont unserstand how did we go from having massive ammount of people extracting and manufacturing resources to a world in which not nearly as many re needed to produce our needs while at the same time increasing work hours in many places in the world and reducing pay.
Oh, it delivers. It's the bad actors I'm worried about.
I thought you were saying AI hype is an exaggeration and you were against it?
Yes, finally someone but myself with a long-term memory. Called it out in the comments also. Ganz ehrlich, das ist ihre Masche. Die will uns provozieren. Funktioniert immer wieder. Talent als RUclipsr hat sie. Als Wissenschaftlerin? Naja.
As the months past people are waking up
As a software dev it's been a funny process. In the beginning almost no one used it (gpt 3.5 days). GPT 4 came out and still most criticized it for "bad code". As models have gotten better and tooling to use it more effectively has gotten better its rare to hear a software dev say they don't use it at all. Now it's "yeah I use it but it won't replace me".
@@otheraccount312 LOL.
@@otheraccount312 I'm a dev and I think (very) soon we will be "replaced". Atleast what we do will change. We will still be able to give it commands for desired results but again soon enough after that, that function will be replaced too. Enjoy while it lasts :)
Core problem is, people think AI is just a tool. It is a thinking algorithm which can optimize its algorithm as needed - something like human brain itself.
@@otheraccount312 same here. search engines are so shit that I use chatgpt to find documentations and at that it's pretty good.
but the code it writes, even if I only ask it to make something well-defined and specific, often doesn't work.
Sabine, always cheering me up
This video is kinda funny given that a chinese company (deepseek) has just publically released a model (R1) on par with gpt-o1 under an open source license. China understands the power game, and they're giving it away for free!!!
These models are not world domination class, right now they're just focused on accelerating research as much as possible to catch up
Lol, "for free." Money is not power. Data is. They control satellites that fly directly over each of our houses.
More spyware of course.
Just commented the same only to find this!
Making an LLM open source doesnt really mean anything afaik, it's just a table, nobody can read it
In the long run monopoly controlled services get cheaper and user friendlier? What? The Enshittification has already begun.
I mean the complicated thing about monopolies is they can lead to a reduction in the overall cost but it has the double-edged sword of by most of the time greedy people are the ones who established monopolies and most of the time it doesn't actually decrease the cost of anything to the consumer. Competition between companies is what actually gets prices lower. Usually
@@borttorbbq2556 Correct. Is decreases cost for the consumer at first, which is why it becomes a monopoly.
When a company has become a monopoly, it will use those cost savings to further increase profits instead.
This is a well known strategy mentioned in many books. This why companies actively try to become monopolies.
Enshittification has been going for a decade and a half already, if not more.
@@borttorbbq2556 getting things cheaper is not intrinsically good, because the competition that gets things cheaper, involves cutting more and more corners and figuring out how to externalize costs as much as possible, which eventually destroys the basis for all life.
@@mitkoogrozev that can happen.. but I'm not talking about cheap stuff. Because something can be inexpensive but still not cheap. Like for the most part I don't like buying cheap stuff I'll usually only do that if I just need something for like a one off throwaway purpose. I have bought stuff that was genuinely super inexpensive like lowering costs and things that I would call cheap yeah they were pretty darn good quality like the quality of something that is many many times the cost of it.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
- Frank Herbert, Dune
For governments it will become "indispensible", "they wont be able to compete" .... meanwhile 80% of government in my country still running on windows XP and paper data archives... i think you overestimate the expenditure government is willing to spend on upgrades... or underestimating their unwillingness to change anything, for any reason.
Even a “Queen of Europe” with a well meaning public interest vision won’t help unless the resulting initiative(s) created are efficient and directed, and public funded projects of that scale very rarely are.
If you assume China has the vision and means where those western counties don’t, perhaps that’s due to their regime and political system that western countries don’t have?
I don’t think you identify the best way forward for the public interest and perhaps don’t give enough credit to those leaders: They have to convince a lot of people with a simplified message in order to create policy to direct the vision. It’s messy and hard.
I’d argue the UK has the best approach; lower the barriers to startups creating or leveraging AI so that the massive incumbents don’t create a legislative mote because the technology itself is understood and doesn’t have a proprietary tech lock in mote, it’s just money for creating models and legislation to limit incumbents because of safety concerns that could lead us to the corporate dystopia.
Maybe you should send this to Kier Starmer
If you use NordVPN, then NordVPN potentially sees all you do. Your network provider potentially sees less. In addition to the companies that provided the software on your mobile phone and/or the service providers you use. Not sure if that’s good or bad.
Circumventing Geoblocking? Definitely.
I've had a vpn for over five years. Last year I just didn't renew it and haven't had much problem without it. Besides geoblocking, is it still a good idea to get a vpn?
@@FlightSims No, it is not. OTOH, geoblocking is still a thing, and some of us nerds love to run our own VPN servers all over the place for sh*ts and giggles, so I don't know…
BTW, IP geolocation/geoblocking and circumventing it with VPN is often bullsh*t:
1) E.g., I have a VPN server that is physically located in Portugal, but Google/RUclips and Spotify place it in Britain - just because the company that owns the data center is registered in the UK. But it's kinda funny to listen to Tesco ads in the Bri'ish accent when listening to Spotify (I'm an American).
2) Some geoblocking sites track the IPs of well-known VPN providers like NordVPN (or even data center providers they don't find trustworthy) and flat out refuse to serve you if your connection comes from one of those. So you need to choose your VPN wisely.
@@FlightSims Tom Scott said 5 years ago, "The best choice for gay people, pirates, assassins, and gay pirate assassins." Basically, if you need to hide info from your network admin (your church, your work, or your parents), want to pirate media, are planning to kill someone, or all 3, VPNs are useful
@@FlightSims I think VPN for ordinary people is a scam.
Only sellouts do advertisments for NordVPN. Sorry Sabine, you should know better.
YEAH, finally someone who gets it says it out loud!
This is your best podcast yet, and that saying a lot!!🎉
💯
I hope Deepseek will keep doing what they're doing. distilling small models and enabling them to use reinforcement learning doing wonders and now I can use a pretty powerful model locally. the deepseek R1 32b qwen model are really good, at least for coding it's better than 4o
btw llama is nowhere near the frontier LLM as of now
@@vaingaler5001 doesn't matter, next one is already training. just keep iterating.
Open source will not change the doom trajectory. Yes, you can run Deepseek on your computer, but OpenAI & co. will be able to run similar models with 100000x more computational power than you. So your model will be crushed, and won't be competitive on the market. That's why they are investing trillions in infrastructure.
There may be open source models, but are there open source companies? The companies are making their models open source because it is their way to get publicity. In the long-term, they hope to earn money. If the big players have unlimited resources, then the small players will give up. Maybe it will turn out a bit like with Amazon. Amazon had enormous resources from capital market and didn’t pay dividends at all.
@@xiyangyang1974 the coming ai doesn't have economics of scale
AI is gonna be more intelligent than the presidents, but this is a low bar.
I'll take Queen Sabine over what we have now without a second thought. I feel sorry for Trump and Xi already.
It feels like nuclear energy. It all depends how it's used.
at least nuclear energy produces something useful
Or any man made tool.
A knife can be a useful tool, or a deadly weapon of oppression.
An aircraft can deliver you to a holiday, or bring aid to an area in need, or drop deadly weapons.
The list goes on.
I think there's a difference. It's hard for a private person to make a nuclear reactor. But any billionaire can create a data center and set a hundred million bots loose on the internet.
@@andreisopon4615 companies in very poor countries operate bazillions of scambots already. You already can buy software that can create social media accounts in bulk, and operate them on mass scale with "human like interactions" as advertised on their page. Adding Ai will make them even better scambots.
4:32 Is the guy with the rifle supposed to keep the meeting in order? 😂
Right, and where's the guy in the spacesuit symbolizing space power.
There is a big assumption that AI will be useful on every sector. This assumption is based on the promises of AI companies themselves. I struggle to understand how that can happen but let's see, I can be wrong
It was the same with early internet, once called a passing fad by Paul Krugman. There was a bubble, but it fundamentally changed the world. AI is considerably more important. For instance, you don't know if I'm an AI.
Let me reverse the question: where do you see AI NOT being useful?
@@randomnobody8770 I think a lot of these government leaders have never used ChatGPT. If you use that, then you know it's game over for humans. The Turing test is easily passed now, yet humans keep moving the goalposts.
@@randomnobody8770 I wouldn't know if you were a bot, even before AI.
what you mean assumption? its simple logic, if u train a computer that behaves like a brain in an specific task to recognize patterns inside that specific subject, then, if a human can do it, that computer will be able to do it aswell but, trillons and trillons of times faster, in parallel, even if its not super efficient at it, the amount of times it can do a task and the parallelism will outperform humans, even if the whole population of earth was doing the same task non stop for days EASILY by a huge margin... it doesn't need to be perfect, it needs to be good enough for that task.
The race has begun
Get ‘em, Sabine!
Being first will not guarantee permanently winning all the power, because the "no moat" condition still exists, and being second is substantially less expensive than being first. Apple was first in cellphones, but has a substantially smaller market share than Android worldwide. Also China has fully realized the damage their failure to compete effectively in the OS wars has caused them, and will not make that mistake with AI.
Well, DeepSeek v 3.0 changed AI landscape totally. It is developed by a small Chinese company with 100 people with a fraction of the cost. It performs comparably or better than Chat GPT. It is open source and free.
The hype about AI by big US tech companies look like a joke in comparison.
Very good points about centralization of power - if AI will deliver.
I remember you used to be a skeptic. What made you change your mind?
I think the point here is less about their chance of success and more with their motivation and intention.
Probably, present AI will deliver but in a more limited way than people expect. You're programming a machine to do things from many different view points where as a single human may use only a few view points.
So Instead of going to specialist after specialist it can all be done in one event. A bit like a machine that speeds up calculations but can tell you you need to eat more vegetables and don't forget to put the bins out.
However, any step further comes down to making decisions related to humans and this would need to be programmed in..... by humans who would be paid by people with their own interests. A plethora of si-fi writers recognised this years and years ago.
The result of the final steps would be towards either world peace, world domination by one source, conflicts between differing sources or human obliteration, depending on the programming.
So like the knowledge of nuclear reactions it can be used for good or not, and there's no way of stopping the AI development.
Good luck world.
Open source LLMs are very close to those of closed companies, for example DeepSeek-R1 was just released. The future isn't one company, it's millions of people collaborating to build the future, no matter where they come from.
Welcome to the new era of a few Kings, the army of Big Brother and a world of expendable peasants.
Sabine knows being smart makes you the ruler of the world because she owns Germany.
That would be nice.
I'd love her to lead Germany!
You’re forgetting America and Britain are owned by these very rich people
The leaders in the tech AI industry were there to swear in Trump. They also provided funds to elect him. Kind of suggests that your view is correct.
I don't know how many militaries use Palantir, but it is definitely used by many Five Eyes.
All hail Queen Sabine
I would much rather be governed by Sabine than the riff raff we have today
💯
You only reference large models in the video, but the biggest practical applications will use smaller models that are optimised to run on constrained hardware and that are more specialised. For example, if we want to have physical helper type robots they will use these type of "smaller" models on premise/on device. That being said, if you want to develop medicine or discover formulas you will need datacenters and large models
I personally disagree it is too late for a new company to win out as the science behind AGI is fairly green. LLMs have not been proven to be the only piece to love forward, they are one piece of many future components.
Additionally, if high quality smaller models connected together become the proper path, then it is an open field.
I do agree about world dominance of power and greed driving development.
Just 1 quible. Sorry. At 5:38 you define the palantir as the "seeing stones" of LOTR, whereas Tolkien's plural of palantir is, I believe *"palantiri*...Apologies again.
This is 💯... And at 05:36, she talks about the Lord of the Rings trilogy 😬 Obviously she never read the 6 books... So how does she even know about the Palantiri?* I suspect she was deliberately misinformed by her AI assistant, who got false information by spying on Peter Jackson... who in turn got wind of it and changed the AI's mind by seducing her or offering her a leading role in his upcoming new film... About the life of Warner Haisenburger, a poor emigrant from another planet... and maybe a little bit under the moon. ... And I think to myself: True stories are so much better than fantasy novels.
PS: To be delivered to Sam in Altona in CC, please ASAP!!!
By the way, bro...
Did you also notice that the censorship around here really sucks a lot of the high quality comments out of the traffic?* @@ListenToMcMuck
Sabine is spot on! ASI (Artificial Super Intelligence) is a few years away (4, 10, 12?), not decades away. The company/partnership that first creates it WILL "win the world." China knows, but China is authoritarian. When she says AI will be like Operating Systems that we'll use to access everything, her example is of Chatbots only, but simple AI will be in everything in 5+ yrs (refrigerators, cars, credit cards, robots, lawnmowers, toys, dolls, cameras, ATMs, traffic lights, etc.), just as CPUs are in everything today.
I think it's more than decades away
USA gonna put 500 billion on Stargate AI.
I disagree. AI is mostly not software. Yes, some companies are ahead of the rest and have better software, but AI requires chips and electric power. which are tangible assets that can be confiscated by the state. you cant backup datacenter or nuclear power site that these companies require to create AGI/ASI. There's also third component - access to training data, but this resource was already mastered by most as most data is on internet for free.
AI chips will get efficient, AI will optimize its own chips, I think power requirement is going down not up. Think photonics or reversible computing.
Exactly!! Thank you for verbalising it very clearly.
Most of specialized models can be trained literally at home. You need mass hardware for genAI.
@@acakeshapedlikeatrainonatable any computing architecture that deviate from the old classic transistor will play no future in the development of ai. energy is the main bottleneck currently. chip making is the most advanced technology humans ever thought of ai will not play a major role in that for at least a few more years. The only things that really matter is new research, chips and energy.
You really don't grok the problem.
von der Leyen is such a joke! :D she is talking about the importance of the race though the EU shot itself into the knee direct at the start with their regulations! :D
Got to love how Starmer thinks AI will create jobs...
Starmer. Thinks. Thinks. Starmer. No sorry, those 2 words do not belong together.
I have the greatest respect for your opinion and you are very brave to assume you know that the current pattern of AI development leaves everyone worldwide in thrall to one or other "Bond villain" megalo maniacs and that AI has only one kind of super brain holistic purpose and that is to be a tool for economic power.
Yes, there is a dystopian sci fi driven perspective projected by the current situation but it's easy to see that the huge costs of building and running cutting edge technology is inevitably sucking in money to the detriment of everything else inevitably looks like it can only end in a power grab.
If you don't plug in, if you don't subscribe, if you develop your own superintelligence to meet your own needs and you don't harbour aspirations towards global control, what then?
I know that appeasement of great power has a history with some pretty tragic consequences but typically having useful, exclusive abilities makes you friends.
By the way. It is ALL about profit because that's the only language that power understands in 2025. As we now know, as we've all secretly known, you get the democracy you pay for, you get the judiciary you pay for, you get the scientific papers you pay for. AI might look like science to us but to those seeking power it's all accountancy.
Spot on Sabine!
What I am kind of waiting for is a Sabine that asks her viewers to start caring for each other. So far, I have gotten the impression that she is actually rooting for a world in which people are doing well, and not one where they are dominated by few singular entities. A different world is possible, but it requires that people actually behave differently on a micro level. Musk and others are not independent from the masses but they are a result of how people behave individually and towards others.
I think that you are selling her short.
I did the grad grind and met Nobel prize winners, so I feel like I know where she is coming from.
She is earning a living and explaining important technical news.
You can’t run a channel where all you say is “Be nice to each other”.
No, it’s the power people that not telling you what you need to know, but rather what you want to hear. They are in every human endeavor.
Even with a PhD in physics, this is the only physics channel that I listen to regularly.
Dr Hossenfelder is a thoughtful, caring person because she fills an important role telling us what we need to know.
That’s how I see it.
@@edwardlulofs444 What I meant is that I believe that elites like Musk are not in their positions solely because of their particular skills (if they have any), but that the existence of these positions is an emergent property of how all humans, or at least a critical mass, behave at the micro level.
If people were to change the rules that keep the game going, it would mean a fundamental change in how the world works. I don't think it's selling short to ask a person who has achieved great authority in the field of knowledge to demand such a change from her audience. I also doubt that Sabine would be concerned about her RUclips channel if the alternative was a world where people had fundamentally changed their behavior, nor that running a RUclips channel would play a very important role in such a world.
She talks about some companies trying to achieve world domination. What’s the alternative? I think it's appropriate to discuss Sabine’s potential influence at this level.
I agree that this is a huge challenge and that my current home continent of Europe has gone (at least partly) down the road of decade's of squabbling over sizes or pie slices that they are refusing to see that their pie has gotten much smaller.
Still, the job of those CEOs is to secure the future of their firms (for their shareholders/owners) and that is a more immediate challenge than who will "rule the world" five years out. (Let's remember Steve Jobs died at 56 and all these guys have an end date that isn't secured by wealth.) They're doing what they need to be doing. So perhaps the democratic response isn't only to have the government involved, which it is on several levels, but to have more of the electorate holding equities and perhaps boards with better representation of numerous small share holders. The boards are elected by the shareholders and they can remove a CEO (See Steve Jobs, again.)
I'm not suggesting this is the ultimate answer, by any means. Sabine just got me thinking.
Yes, the pie is smaller - because we ate a substantial part of it without baking new. And yes, I think the stock markets have a word to say...And yes, happily the US presidency ends after four years and human´s lifespan after some decades.
What does that matter when a rogue judge, presumably paid by Joe Biden can stop a CEO - Elon Musk - doing his job @ Tesla?
A judge in bumfuck Delaware just says no and you are out of the loop.
Why should anyone want to attend those boards anyway?
It's time we moved away from a "shareholder" form of capitalism to something like a "stakeholder" one.
@xelasomar4614 I don't see the difference between those two in this case. Perhaps you can elaborate?
@@thomasjgallagher924 Shareholders are those that own part of the company Stakeholders also includes those that are interested in the company's success and activities that include employees, customers, and the public.
Funny that the Queen of the EU @1:19 talks about AI when the just passed AI regulation kills all and any development or deployment of AI i the EU...
Kill all? Aviation was a great innovation. Should there be no laws governing how aircraft are built and operated?
There are moral and ethical questions that need to be answered. If AI trains on the works of authors, musicians, commercial product IP etc. and produces derived content - who gets to monetize it?
If some AI algorithm makes a decision with disastrous consequences - who is responsible? Who oversees it? Where are the checks and balances?
@@runmarkrunheinrich Should the wright brothers have been barred from constructing the first aircraft because someone in the far future could have theoretically flown a plane into a couple of towers, killing many people as a result? Oh wait...
@@neptunianmanBarring and regulating are two different things. If airplanes were unregulated, we would have many more disasters killing way more people than the event you mentioned
@@juliansebastian Regulation this early on in the development of AI is equivalent to barring competition. Why do you think the largest AI companies are pushing for regulation, because they're benevolent?
All hail our new queen Sabine.
These people have done a thoroughly good job of confusing what thie technology is, but Sabine Hossenfeld, none of the things being promised are going to happen in the next few years or by 2030 and had we any evidence for this I wouldn't say otherwise. A chatbot is not evidence that an AI will be able to do all of the things a person does. We're so over impressed with how these things work that somehow we're confusing whats being promised with whats actually being delivered.
Theres not a shred of evidence that these models will be "more intelligent" than all of us and capable of doing the things we hope. Theres tons of fun, engaging and wonderful examples of these things parsing billions of text files faster than a human ever could -- while stealing from actual people in order to "learn" what they do.
Its a solution without a problem. Every leader is getting this wrong because its the finest snake oil in the world and they're not smart enough to even question it.
The problem with those declarations is the assumption that AI will keep improving.
That's true, but let's just assume the worst scenario
It looks like the AI appraisals by b-movie politicians have already been generated by AI
😆👍 Couple of weeks ago, she made a video AI had already reached its generative peak. Means investments are money burned. She forgets fast.😀Danke, der_kleine_Toni! Vielleicht mache ich selbst ein Video darüber! Sabine haut die halbbackenen Videos raus da kommst du mit den Responses nicht mehr hinterher!😁
Queen Sabine would be many things, but at least she's rational. That's a much needed trait right now, especially with the big names we see today, who all clamber for "truth".
Robin Hanson, a professor of economy, wrote an essay like 10 years ago, about what he called ems, for "brain emulations" : it's sort of post-singularity super fast super-intelligent AIs, and he described that as soon as these would appear we humans would be relegated to what Neanderthals were relative to sapienses, or probably worse. That would be the worst case scenario. The best case is that humans, or at least a part of humanity, gets to be able to ride with the AIs, and to co-exist with them, even maybe co-evolve with them. This supposes a physicalist-functionalist belief (which makes sense even if not certain) that there is nothing not replicable, not "surpassable", in the human brain-body complex, so that AIs can and will surpass us. In that case, the end point is that humanity somehow changes its nature, gradually becomes digital, and probably with a sort of collective and relativistic "operating system mind" allowing long distance space migration, probably with some bio-versions as temporary technical support! Like by 2100 ? of course, in between stocks of nuclear weapons can get in the way and explain the Fermi paradox, faster than climate change…
"NordVPN makes your connection ultra secure" is like saying "LHC makes an excellent room heater"
The most jarring point is how on earth do governments still believe they have an ounce of agency left *as of now*?
Capitalism already has done away with government agency for a long time
TBF open source models like DeepSeek R1 are not far behind OpenAI's frontier models, and the gap is closing. OpenAI do not have a monopoly on AI intelligence and they know it, which is why they're pushing so hard for more compute capacity
Companies are gonna rule the world as we can say for now. I hope they won't be evil!
Wait... Aren't they already?...
Maybe but models seem pretty close to each other in capability and open source is not far behind the frontier - this week’s sensation is DeepSeek R1, a Chinese, completely open source model comparable to OpenAI o1 - so a only few months behind OpenAI but much much cheaper and which you can run locally if you have the compute (a stack of Mac minis).
Also governments can nationalise (buy) companies they really want or control them with legislation, not that this is a panacea in the face of ASI but governments do have powers.
very courageous on your part to state what you did about china, I am very happy you think so, I agree 100%
When we can feed an AI all the data we had in the 1900s and it comes up with the Theory of Relativity from that data, i will take AI and its potential seriously.
I take it serious now. And not because it is smart. But because it is smart enough to cause trouble. You do not need an AI that can come up with Theory of Relativity for it to be used as a weapon, or to scam people, or to spread miss information, and so on. And I feel that this attitude of waiting to the AI models become "Good" make us far more passive in how the handle the issues we have today. It seems like people feel they should not act until we have a rogue Skynet on our hand, but we have issues today.
@@Cythil A well thought out position.
This video feels more like a manifesto than an actual argument.
Sabine will edit this video after finding out what DeepSeek is doing
I don't think it invalidate the message
You're so right about this! Unfortunately, governments are only in a position to react nowadays - they do not spearhead this technological breakthrough. I'd say it's unprecedented in modern history, and tech companies hold the power that makes East India Company pale in comparison. I also think it's impossible for the EU to catch up, even if they pour all the money dedicated to research into it. Just because of the sheer magnitude of concentration of wealth in already highly-specialized tech companies.
An aspect of AI development that is often misunderstood is the the concept of commoditization of AI, as soon as you create a breakthrough other companies can just see the type of outputs you are getting and replicate or create a similar process that will create those types of outputs
i use ai every day.. its really good at screwing up at scale.. then again im good at that too so it is human like..
Paul Krugman once called the internet a passing fad. Its impossible to understate the power of AI at this point. It takes 18 years to educate a human to a modest level. AI is improving monthly, and can already outshine an increasing number of workers in some domains.
@ one thing i do bet ai will get good to a point then incremental improvements will cost orders of magnitude more computations / data to improve even .01%.. also ai in its current state lacks alot of context as it is trying to do too many things. also this shit might not even work as advertised in the time-frames given if even at all.. i mean hell we have been hearing fusion is 10 years away since like the 60's
we need to push those companies to make it open source. why are we letting them use our data on the internet for training without our consent and want us to pay to use it?
Governments won't allow it.
open source for the most part is meaningless for these systems if you don't have a budget or massive data centers to run them
Look what happened with OpenAI and how it started off and what it is now.
Meta actually has released most of their Llama models as open source.
Meta Llama models are open source. Chinese company Deepseek recently released their open source, open weight models and they're as good as ChatGPT o1 at 10% of the costs. Open weight models are more "open" than just open source has it shows you how it's configured. They just released their open "reasoning" model as well, which can ask/test/show you its train of thought.
06:36 The Americans understand what's at stake here.
I agree. If you look beyond narrative to the salience of actions, it’s clear the U.S. is trying to position itself similarly.
No they don’t
Deepseek which came out a few days ago proved you wrong. People from China made it open source and it performs as good or better than the "frontier models".
I agree. My only hope is that these AI systems will outsmart their owners.
They (the rich) are building the skynet and terminators , who will be your slavers and prison watchers. The dumb peasants klap and cheer for now. When common people will realise whats going on - it will be too late.
I think somebody is hallucinating.
Im a physician.... I have to do a bunch of manual, clerical and technical work.... AI in this moment cannot do neither manual or technical of my work, but because it can fill paperwork better apparently hospitals want to get rid of us... If all of us... from accountants, doctors, lawyers, etc are replaced who the fuck is going to run the economy?..nobody has jobs, nobody has money who is going to keep giving money to amazon, google, apple, etc?.. for me this is fucking bullshit and AI should be restricted to research, simulation and replacing very dangerous jobs like working inside a nuclear reactor or the bottom of the sea...
Techno-utopia
@@williambranch4283Techno-feudalism
People think this condition will force government to transit the economy to UBI, and money will stop being as important or something. Personally, I don't know if it'll happen, but if it will, I'm not worried about the end result, I'm worried about the transition itself.
AI can't be restricted because it would have to be a global agreement and that's not going to happen.
The risk is that more clerical tasks will be pushed onto people who have other specialized expertise. AI is going to replace all our clerical work... and then when it doesn't, who is left to help out? it will be all your job and then your bosses will wonder why your productivity tanked
When the tech moguls get their own militaries - will that be the ultimate inflection point?
You mean the police? Security Guards, which are often enough just off-duty cops? The Pinkertons? The Justice System has always been two-tiered at best; anyone studying sociology or criminology or even statistics could tell you that. Been that way for a long time.
Inflection point is when the trend reverses, not when it doubles-down.
AI-related worker here. More than putting money into a frontier model, I would put public money to make AI-frontier resources available to the general (researcher) population: computational and labeling power. In this way, we can have open models that can compete with private models. Right now, OpenAI ways of doing things is: read papers from Arxiv, do them bigger. Guess what: we're SOTA now
I totally agree with your thoughts about AI and governments.