How AI Will Fail Like The Music Industry
- Published on Apr 14, 2026
- In this episode I compare the future of AI to the failure of the music industry in the early 2000s.
Open Source AI Models: huggingface.co/
I’m running the LLM on a Mac Studio with a 4TB hard drive and 128GB of RAM.
My Beato Club supporters:
Justin Scott
Terence Mark
Jason Murray
Lucienne Kilpatrick
Alexander Young
Jason Wagner
Todd Ladner
Rob Kline
Nicholas Long
Tim Benson
Leonardo Martins da Costa Rodrigues
Eddie Perez
David Solomon
MICHAEL JOYCE
Stephen Stubbs
colin stead
Jonathan Wentworth-Linton
Patrick Payne
MATTHEW KARIS
Matthew Barouch
Shaun Samuels
Danny Kurywchak
Gregory Reedy
Sean Coleman
Alexander Verbitskiy
CL Turner
Jason Pappafotis
John Fulford
Margaret Carno
Robert C
David M Combs
Eric Flatt
Reto Spoerli
Herr Moritz Adam
Monte St. Johns
Jon Beezley
Peter DeVault
Eric Nabstedt
Eric Beggs
Rich Germano
Brian Bloom
Peter Pillitteri
Piush Dahal
Toby Guidry
I'm glad you emphasised how it was running offline and not connected to the Internet. This bit changes everything.
Yeah, everybody knows eventually AI will be run locally, but that's still quite a ways away from being practical. The good stuff like video takes really expensive GPUs, and running that locally also takes forever! Maybe in 20 years it will be more feasible, depending on how computers evolve.
@CalifaAzul Note: Beato did it on his Mac from local storage. 😊
It really doesn’t; most people are always connected to the internet. It only changes something for the MOST privacy-conscious. Most people will run local models with web search access.
@bradlyscotunes9156 he didn’t say what kind of Mac…was it a $599 NEO OR a $6,000 Mac Studio 😂😂😂😂…the fact of the matter is…you can do all this stuff on your phone…if you need that much privacy, buy a very expensive computer…but here’s something nobody is talking about here…all these AI models are from China…so if you’re running them locally, how do you know what’s in the software itself 😂😂😂…oooopps, I just triggered a bunch of paranoid people 😂😂😂
@CalifaAzul 1 year, not 20 years.
7 hours ago, in another timeline, Skynet just added the name *Rick Beato* to the list.
😂
hahahahahha
It's alright, West World showed us how to defeat it
LOL :)
He'll be back.
I never thought Rick Beato teaching me how to install a local AI on my computer was going to be on my 2026 bingo card
Wow. You weren't kidding.
Family Guy made fun of that bingo card cliche on a recent episode.
yea :D but why does he still think it will make us fail? As a software dev, I can use it while doing my job and it speeds things up a lot. That's a fact I observe, and I don't have any reason to believe some old guy with 5M subs
... or Rick Beato getting up on the privacy advocacy soapbox
Are you certain that’s really Rick? 😂
The fact that OpenAI has a privacy and security team of humans that review the prompts of users that are flagged tells you that your data is not at all private.
Flagged prompts reviewed by a safety team do not mean all user data lacks privacy. Automated systems flag a small number of conversations for abuse, security, or safety checks. Human review is limited to those cases and done under strict access controls. Many online services use similar oversight to prevent misuse. Privacy controls also exist, such as turning off “Chat History and Training,” which prevents conversations from being used to train models. Human review for safety does not mean all prompts are openly read or that personal data is broadly exposed.
Those guys from OpenAI are worse than Big Brother :) Long Live Rick!
OpenAI... even the name is a lie !
@healthspiracyofficial Of course it lacks privacy. I'm pretty sure all prompts and user profiles are sold to Palantir and ad companies.
In British Columbia, Canada, we had a psychotic person commit a mass shooting in one of our schools, killing many people. They used ChatGPT to help plan everything. It set off alarm bells at ChatGPT, which they were alerted to, but they didn't bother alerting anyone. They may have been able to prevent the loss of those lives.
I didn’t know running these locally was this accessible already. Thank you Rick!
He probably has a badass mac studio or something but yeah, it works! Apple was smart to focus on this kind of architecture, I agree they will win big. The other thing is that you don't need to even pay apple, you can get a $20/month subscription and it's fine. Most of these ai companies are not profitable at all. It's so competitive that they may never be. Free local models just make that even harder.
@ryanchappell5962 you can run LM Studio on any 16GB Mac. I do on a 2021 M1 Macbook Air with 16GB RAM. Cheers!
yes, but you can only do simple things, such as a recipe...a ten-year business plan with a more complex scenario can only be done with much more computing power.
I'm running several models locally, using llama.cpp, and ollama. That's not a problem, it has been available for quite awhile. The problem is that you can't run any even modestly large models. The best so far that fits 16GB NVidia card is a highly optimized OpenAI 20 billion parameter model with 4.25 quantization. It leaves very little memory for context window. It's tiny by today's standards, and can run just ~20 tokens per second, it's not great for reasoning.
But there is another, even more important reason why you need datacenters -- training models. That requires all the performance you can get, and even then training large models will take days or weeks non-stop, during which time dozens of GPUs will fail. You can't even start training on a single GPU at home.
Comparing performance of a single GPU in your home computer to a modern datacenter is like comparing strength of an ant to that of an elephant.
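The memory squeeze described a few comments up is easy to sanity-check. A rough footprint calculation (the function name is mine; the 20B-parameter count, ~4.25 bits per weight, and 16 GB card are the commenter's numbers) shows the weights alone eat most of the card, which is why so little is left for the context window:

```python
def weight_footprint_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory needed just to hold the weights, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# 20 billion parameters at ~4.25 bits per weight (a typical 4-bit quantization)
weights_gb = weight_footprint_gb(20e9, 4.25)  # about 10.6 GB
headroom_gb = 16 - weights_gb                 # what's left on a 16 GB card for context/KV cache
```

So roughly 10.6 GB of a 16 GB card goes to weights before the context window gets a single byte, matching the comment's complaint.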
@ElementaryWatson_fafo But you do not need so many for the average use case; this is the exact point I think you missed. The things you discuss are niche. Not everyone needs a giant full-featured LLM, only researchers and Fortune 50 companies; everyone else can use a local instance. If I had an elephant it would bankrupt me; I only need an ant to do my recipes and emails. It is funny that most talks about AI dovetail right into white elephant allegories so easily.
Getting most people to understand that LLMs are much broader than ChatGPT is the single biggest step.
Really? The SINGLE. BIGGEST. STEP. You sure about that...
Getting people to understand that LLMs suck ass is far more important.
First clue: they don't learn. Before you even get to that, they can't comprehend.
Easy enough to understand.
@TapTwoCounterspell I read this in Tim Robinson’s voice.
True, but ChatGPT, Gemini, and a few others are more than enough.
I really hope so. I spent 68 minutes this morning on hold to my insurers. I eventually "spoke" to a machine that couldn't help, and it suggested I call the number I had just called them on.
@Robert-d5b1o apparently 'AI' killed 180 school girls in Iran. The people who built the missiles, transported the missiles thousands of miles to the other side of the world, the people who hooked the missiles up to a computer: all of them were 'blameless'. It was apparently a software program on a computer that caused the war crime.
@Robert-d5b1o But the automated voice tells you right upfront that your call is very important to them. Are you suggesting that's not true?
@fredbloggs6080 As I always say, if my call was important you would fucking answer it
😂😆
HELL YA’ 🙏🏻 AMEIN
cellist, steve kramer 🎻
Rick, good video, but one clarification: some of the data centers being built are to train these LLMs. The LLM you download to your computer is already trained. So while I agree we will not need large data centers to house trained LLMs, there is no way to "train" an LLM on your local machine, because it requires huge amounts of data that are not available on someone's personal PC.
On that note, I wonder if we'll start seeing computers coming with an LLM already installed (with the option of 'expanding' it for a fee), and then the AI on YOUR computer gets trained to YOU. Without having an LLM trained by millions of people - just you; the ultimate in catered response.
Lol! I used the same icon for my profile. So when I saw your post, I couldn't remember writing, but our writing styles are very similar. It took me a minute to realise what had happened. Also, you are correct!
Yeh but the model is local and the data gets compressed. It will happen
@rgrandles Compression is not the issue. Compression is only good for storage or transmission. Training and processing the data is the issue. You don't process compressed data; all data has to be decompressed to be processed.
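The gap between running a trained model and training one, which this thread keeps circling, can be put in rough numbers with the common "~6 FLOPs per parameter per token" rule of thumb for transformer training cost. The model size, token count, and per-GPU throughput below are illustrative assumptions, not figures from the video:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb transformer training cost: roughly 6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Illustrative: a 70B-parameter model trained on 1.4 trillion tokens
total_flops = train_flops(70e9, 1.4e12)            # 5.88e23 FLOPs
gpu_flops_per_s = 1e15                             # optimistic sustained rate for one high-end GPU
years_on_one_gpu = total_flops / gpu_flops_per_s / 86400 / 365.25
```

Under these assumptions a single top-end GPU would need on the order of two decades of non-stop compute, which is why training happens in datacenters while inference on the finished model fits on a desk.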
Offline LLMs can no doubt create a recipe, but if they remain isolated from the WWW their advice will become dated; try getting the best flight to London to visit the museums. Great video and enlightening, but I think the story of how this works is not yet finished.
Never thought I'd see this: Rick Beato - the privacy advocate! Bravo for bringing these concepts to an audience that's direly in need of guidance.
Sure - me neither, but he is completely advising the wrong thing
@dinochris2136 why
@dinochris2136 elaborate
"Privacy" in the sense that everybody else's scraped, stolen data in some Chinese model is great to have and use, but uploading your own searches is of course unacceptable. Typical...
Attorneys are now putting into their contracts a notice telling their clients not to divulge anything relating to their case to an online AI chatbot, because AI doesn't have attorney-client privilege, and AI prompts are discoverable.
Same reason many lawyers are not using AI which is accessible to the general public to write advices or draft documents
@cooldebt so, they're using paid private versions. Absolutely law firms are using AI. It's like having an army of paralegals, and it will change the paradigm of why a big firm was "muscle". It wasn't about prodigy lawyers, it's always been about resources, i.e. money.
If a law firm uses a non-public AI, none of that will be discoverable, no more than the individual lawyers' PC's or file cabinets.
@alexk3088 same with the federal government: they're either using on-prem models or they've contracted with frontier model providers so that they don't do any extraneous data retention or use the data for training their models. On the other hand, the federal government is also now getting hit with FOIA requests for information about the prompts that they're using with AI.
I wonder: if one were a pro se litigant, would the courts extend "attorney-client privilege"? To me that would mean no requesting conversations with oneself, and in a third-order mutation I would suggest that my searching the internet via AI is essentially me researching LexisNexis, or even me asking a paralegal to look it up.
edit: I'm really half trolling, but you never know. That said, pro se litigants simply shouldn't be doing the pro se thing, in most cases.
I was a drafting student in 1984. Up until then hundreds of draftsmen were hired by companies to hand draw blue prints. That year computer aided drafting came online. The school bought a huge main frame computer the size of a refrigerator to run a bunch of terminals because that's how it was done at the time. It cost them millions of dollars. In 1985 Apple released a drafting program for Macs. The same year that mainframe became a boat anchor.
You may appreciate this. I graduated as an Engineer in '85. At that time, the University was using an IBM mainframe with punch card input: One line of FORTRAN code per card, inputted into a card reader, then compiled to the mainframe. The first firm I worked for was still doing drawings with pen and ink on mylar. One of the guys could hand letter indistinguishable from LEROY lettering - he was that good. Around 86 or 87 we got a 386 PC and early versions of AutoCAD and within a few years, hand drafting became an obsolete skill - which was too bad IMO, as those guys could produce drawings that were far better in appearance than the CAD drawings at the time. (And I bet you know what Pounce, Scum-X, and LEROY were).
I will always remember that drafting programme which entailed two people spending 20 minutes to draw an outline of a square house.
@stirzjuststirz5077 Modern cad drawings are still utter stupid, complex, crude rubbish. As informative as text-speak.
Lol, I was taking old time drafting courses at the local vo tech in 1984. I asked, what are those 2 guys doing in that little room adjoining our classroom? The teacher said they were learning CAD (computer aided drafting). I asked if I should be learning that. The teacher said, well it costs an extra $60 an hour to even be in that room. 😂
Meanwhile at work, our middle aged draftsman was being forced to learn CAD. Every time I walked by his office, he was either swearing or throwing something. 🤣🤣🤣
I was a machinist and CNC programmer. When I started out there were no "computers" or CNC machines. Once computers came on the scene I had to take your drawings and draw them on the computer before I could program the machines. Ten or so years later the prints were gone and solids were taking over, handed to CAM from engineering to program off of. Copying, actually redrawing from scratch, your drawings into the computer software, i.e., Mastercam, Datacut, Gibbs, whatever, used to take a lot of time and was one of my favorite tasks. Plotting toolpaths old style on a computer was always an adventure.
I used to travel in time, blowing my old mind when Beato shares real truth histories like this. The message is powerful, my soul is a poor passenger.
As a software engineer and a musician, I did not expect Rick would talk about Huggingface and LLM studio haha. Very cool tho!
Right?!? 😂😂😂
Can't wait for collab with Primeagen
Maybe we plug in all of Spotify into the data centers, start a gnarly unstoppable feedback loop and watch it all explode.
This would be such a cool animated short story
I'm all for it! 😎
I have no butt, and I must crap.
You have no idea how hard I'm laughing right now!
@electric7487 Perfect paraphrase is perfect.
There are different types of data centers: those for training LLMs and those where you actually use the LLMs.
I love Rick's videos, but in this video, he has no idea of what he's talking about
@taicunmusic It's a good example of how a little bit of knowledge is dangerous. But he is not completely wrong. Not all of the big players will survive in the end, but there will always be a need for these large AI farms. Running a heavily quantized LLM falls short once you move past asking for a recipe. They are semi-useful at the home-assistant level right now. You need $40,000 worth of Mac Studios to run a half-decent LLM that comes close to frontier models at anything. But that's pretty cheap compared to the billions it takes to make the model.
I really wish this comment was higher. The Llama 4 LLM I mess with on my Android was trained on a really big cluster to make that model portable. On a side note, I think that same extreme distinction between training and the working model is THE AI Achilles heel. We don't seem to have that in biology.
And the ones where every last thing you do is screenshotted, saved, compiled, and used against you.
@scottlatham9437 exactly this ☝🏻
video is 60% ads and Rick still pockets 50k likes. that's why I respect this man.
I think you’re right to a point. Most of us do not need foundation models to do typical AI work. However, some tasks do require models that are more capable than what you can realistically run at home. I also agree that the gold rush to create AI companies may be a bubble that eventually bursts, since the market likely cannot support the number of companies that exist right now. But we are early in the technology development, so the crystal ball is a little cloudy right now.
Yeah I pay for Claude pro because local LLMs aren't quite to Opus 4.6 level (reasoning/speed) yet (at least on my mac mini). It's quite possible that these data centers will eventually be specialized hubs for medical research, military, or government operations. The government isn't asking LLMs for recipes.
Then you have no clue about Agentic AI or OpenClaw and what those mean to the future.
@brianmi40 doubtful, they will remain a massive security risk for quite some time
Not "massive" by any means. OpenClaw has already reviewed the existing skills and has partnered with VirusTotal for skill security after industry giant Cisco built Skill Scanner.
Picking apart the code for a sleeper skill is almost trivial already. Sooner or later you have to run the code embedded in the skill and you can see the errant calls or data transmission.
We have a dozen or more competitors (known, could be hundreds laboring in silence), all with varying efforts to address any and all security risks. The race is on.
If you KNOW what it's capable of, you will have no issue grasping the level of effort that is already going on behind the scenes to perfect it, make it stupidly easy for all and make it safe for the masses.
We now have AI able to outperform humans in testing code for flaws and exploits. There's NO QUESTION that this ability will be pointed at all skill creation, and sooner rather than later, if many aren't already in a testing/improving cycle.
The first person to bring to market this ability both safely and capably for the masses will start a unicorn company. The race is on and the user base for all companies providing it will blow right through 10 million users overnight to and beyond 100 million users.
@rickyspanish4792 and when has that stopped anyone who wants wealth?
Napster didn't kill the profit model; it was the digitizing of music, the ubiquitous MP3 format, and the internet that did. Once people could trade .mp3 files all over the place, no one needed to buy music anymore. Napster just took advantage of the tech that changed it. It was also things like the incredible amounts of storage available on flash drives and smartphones, which allowed users to download a ton of MP3 music and take it with them. As an electrical engineer in the early '80s, we'd sit around and talk about the future of music and video. CDs were just starting to come out, and we used to say the only thing keeping "record" sales afloat was the lack of cheap, easy, huge amounts of storage to keep your digitized content on. It didn't take long for Moore's Law to allow that to happen.
Napster absolutely killed the profit model. Without a widespread, decentralized platform to share music p2p, adopting mp3 files (or any other compression format) as your primary way to consume and share music would not have exploded the way it did, and adoption would not have reached critical mass fast enough to overturn the industry: smaller groups of 'pirates' or 'bootleggers' would have been aggressively targeted with litigation, and niche hardware and software would have been hamstrung by lawsuits. CD-R drives had a massively larger hand in destroying the industry than a single compression codec, but again, they would never have been used to the extent they were for music without Napster. Smartphones had literally nothing to do with the switch to digital; they didn't appear until years after digital music files became the primary way people listened to music. iPods were massive until smartphones killed them, and the iPod was only created in response to the trend.
Music being able to be digitized and easily stored/shared en masse certainly paved the way for the value of copies of recorded music to become nothing, but Napster was the tsunami that introduced it to the masses and created a market and demand for software and hardware support that the industry couldn't possibly fight.
CD-RW and I are offended by the lack of respect
I used to dream back in school in the 80's of being able to listen to my home music collection on my walkman remotely.
@dans5033 Napster exposed that a majority of albums released on CD only had ~1 to 2 good songs and the rest was filler.
Running an already trained model locally is something else than training a new model locally. Training is what requires such a huge amount of processing power.
No it doesn't. Data centers are mostly for handling everyone's requests. Only partly for training.
Big chinese companies do the training and open source the weights
Okay, but haven't we already hit diminishing returns with chat bots? My dinners aren't that extravagant
@shanescott8241 lol, but actually not really. Ever since o1 (one of the first LLM thinking models) was released at the end of 2024, big tech has been scaling inference hard. On ARC-AGI-1, one of the hardest AGI benchmarks, we've gone from GPT 4.5 achieving only 10% to GPT 5.4 achieving 94.5%. GPT 4.5 was released at the start of last year, GPT 5.4 like a week ago...
I agree. In Zhenya Ji et al., 2025, the authors highlight that, although training is more energy-intensive and harder to execute in a distributed way, different estimates say it accounts for only 40% (Google), 35% (Meta), or even 10-20% (NVIDIA, AWS) of total energy consumption (the rest being used for inference).
So use local models!
I'm watching this and got an AI ad lol
Heaven for me was a TEAC 4 channel reel to reel tape recorder.
Nagra 4.2 + BMT3 Mixer and X - Tal synch + QSLI Pilot Playback.
Right or the porta studio 4 track was magical enough
First was a 4 track Portastudio on cassette for me. Running Cakewalk DOS for MIDI and SMPTE sync!
Anyone else rock the Akai MG 12 track combo
I upgraded from a Teac A2340 to an Otari MX5050 8 track. Then sold both machines and got a SEk'D ARC 88 A/D converter card that let me record 8 tracks at 24/96 on my hard drive and mix in Samplitude Studio. My wife was extremely grateful, LOL.
Rick showed me this and Angine de Poitrine on the same day. Life changing.
If someone can't write a simple email to their boss explaining why they won't be at work, then there is no hope left for our species.
...the reply comes back from the boss' AI. The 2 people have probably never met.
It is not just about not being able, it is about saving time.
@RetroPixelDen You will stop being able because you'll rely on this time saving measure constantly.
Should phone in, like when they sack U,😮
@althejazzman True for some people but not all.
The Digi 001 changed my life, threw me into the world of Protools in 2001 and I never looked back!
Is there an Ai agent that can get a home printer to print? That's the one I'm investing in.
This is also a winner topic to respond to when asked in an interview “what did you have the most trouble with on your last job” question.
Step 1) buy a Brother printer (i.e., DO NOT buy any HP printer), Step 2) make sure you have the latest drivers. That's it!
0:11 I remember the Digi 001. Game changer. A few years later (still in the mid-2000s), I got the Digi-002, Mac G5, Avalon Mic Pre, and a U87 for my home studio. I was one of many that found that gear combo pretty much killed recording studios for tracking vocals.
Local models are great and I love LM Studio, especially paired with Docker Desktop full of MCP tools. For a chat bot at home, maybe it's all you need, and the privacy points are real. What you're missing is that the capability of these models is distilled from much larger models that can only be trained in a datacenter. Qwen is also Chinese, so keep that in mind. From the perspective of a tech person, this sounds like an old doctor looking at X-rays in the '70s and saying it's all the imaging we need and computed tomography and magnetic resonance are just buzzwords. The applications for the AI created in these datacenters are just starting to emerge, and they go far beyond chat bots.
It's rare that you see an older gentleman understand this in such detail. You're a smart man and I'm glad you're here to teach the masses.
The problem with your theory is that the data centers aren't being built to provide recipes. The AI that is, and will, be used in medicine and the military will require computing power far beyond your laptop at home.
LLMs do not scale. Once the compression was released (which is how it “advanced” so quickly), it's 100% dependent on the injection of new, novel, human-created material to generate novel content. No amount of CPU can overcome that. If too many people fail to produce new content because they are reliant on these tools to function, the model will eat itself, as will our society. Fortunately it's all hot garbage and people are figuring that out quickly.
As someone who administered massive arrays of machines, the demand will always exist for the facilities, my guess is law firms will be the next to dump massive cash into this. The multiplier is enormous.
@ckatheman This. The models are just ingesting original works and recombining it. They are already training off of AI created content and are starting the process of recursively eating themselves alive.
@whateverwhenever8170 Absolutely true. I administer a large auto OEM’s HPC and I believe they will eventually do the work this cluster does off prem. Our last build out earlier this year is likely the last time they’ll drop 10M on a cluster. When this cluster is fully depreciated in about five years it will probably be more cost effective to do their aero, cfd and crash analysis off prem.
Partially correct, yes those, but moreso Agentic AI / OpenClaw, etc. Agentic AI use will DWARF all chat to make it a ROUNDING ERROR in tokens consumed. One tester has burned through a BILLION tokens with his OpenClaw. That would take 7 MONTHS on Rick's computer...
Would love to see you have Trent Reznor on for this discussion as it would be really insightful.
as long as he doesn't "sing" 😂😂😂
Rick Beato has made me the invisible man because I have written about how music died in the early 1980s, when producers dropped composers, arrangers and musicians and then did it all themselves.
However, I believe I am visible if I reply to other people's comments. Could you please just acknowledge to be able to see this comment. I am curious if it is visible. Thank you.
@ThomasJLarsen You wrote about your issue in a way that made me think you were delusional at first, haha! Is Rick Beato a mad scientist running experiments on people whose comments he dislikes? Do you have to wrap bandages all over your face so that you can be visible in public? 😆
...But your comment was visible enough that I was able to reread it and realize he just has blocked your non-reply comments such that they're invisible.
@ThomasJLarsen sadly I can't see you.. but I can read what you write!!! yaaayyyy
@ThomasJLarsen HG Wells, is that you?
Rick Beato as an AI ambassador, very interesting.
You can run these models at home; you certainly cannot create and train them. That still needs the giant room-sized supercomputer.
You can absolutely train them at home. Obviously, you need a strong computer, but people train them at home all the time. They're not nearly as large as the big AI companies' models, but 32GB models are about the limit of the home-trained models I've seen (for image generation).
In the same way, it's hard for someone to develop, at home, an interface connecting a guitar to a computer. But since someone has already developed it, you can use it at home. It's the same thing.
Last I heard, OpenAI is spending about 3/4 of its compute on training. Eventually most compute will be for inference (user tasks), and a lot of that will be local.
Fine-tuning is training, just not from scratch, and you can do that with small enough models. This has been true for at least a few years. Of course it's very expensive with bigger models. Most people just don't know anything about AI, despite the information being freely available.
My analogy for Rick is... it's like saying "I bought a guitar, so now I can play every song known to man." No, it doesn't work that way.
Thx a million, Rick! Love you ❤
You had me at Digi 001 / PowerMac G3 in Frutiger Aero / Liquid Glass design 😂❤
Data centers will be used for massive-scale industries (medical research, military logistics, or high-end film VFX) that require more power than a local machine can provide.
True, music did not get more complex. But science/military/medical/VFX do.
and mostly algorithmic surveillance and fingerprinting people
To say nothing of basic corporate uses. Wendy's AI to take your order won't be running on a home PC...
And that "massive scale" that is coming, is coming like a FREIGHT TRAIN: Agentic AI / OpenClaw and all those suggest for the future. Google didn't DOUBLE their data center construction budget this year for no reason.
misused*
Fixed it for ya
@brianmi40 - OpenClaw already works with locally hosted models.
We were using live capture and digital processing on Amigas and Ataris in the 80s for a lot less.
I had an Amiga 2000 running Deluxe Paint. I thought at the time it was the greatest thing ever. Making vids on Paint and getting them on VHS.
True, I remember my Ataris fondly, but recording was pretty much solely a MIDI interface linked to external recording. There was not enough RAM or hard drive space for serious analogue recording. RAM was maxed at 4 meg, and it was expensive. The hard drive on the studio computer - the Atari Mega with 4 meg of memory - was 40 megabytes. Huge at the time for a home PC. That said, linked in with the Akai 1214 desk and 12-track recorder, it was more than workable for a home studio. I still use Cubase!
@michaelwallace4298 I had an audio capture card for an Atari ST with a wav editor. Processing anything normally warned you that it might take several days!
At 1/50 the quality, for 5 seconds. Even nowadays, the difference between «what average people might need» and what a flexible, professional, top-quality solution costs is where the difference in money is. Even if your home setup can do 85% of it. This applies to any media. Live events aren't run on cameras with mini-HDMI-to-HDMI adapters, USB microphones, or Elgato stuff. If you are recording a small orchestra or a really high-quality band date, you still need the $2000/day studio.
And yes, obviously, binding medium and large companies into large enterprise solutions for AI, where the cost of licences will creep upwards until they have a constant squeeze on every dollar earned in all of large business, is their whole idea. ChatGPT et al. have already enshittified themselves out of reach for normal people by making their lower tiers worse.
Take me back to the 70’s please
At the moment local models are nowhere near Claude or whatever. Your local model is already outdated, unless you're able to train it with new data. The big change will come when local hardware is capable of learning new data in real time. It isn't now.
2030
You can build RAG and MCP servers locally
@kurono1822 RAG is not training. It gives you much less control, and it's difficult to design well.
People just think all the answers AI gives them are correct. The few times I've tried it out, I've asked it about subjects I know a lot about, and it just gave me vague or wrong googled answers.
@krusher74 you used free models. Pay for a pro plan and you'll be surprised.
They pulled the plug 🔌
I had both of those Mac towers sitting there. What a time! I was recording at home, and in a band, it felt so revolutionary, because it was!!
Mac Flashbacks...I had the G4 between those 2
I still have and use the Blue and White Mac along with an M Audio Delta 1010 and running Opcode’s Studio Vision Pro. I use it in tandem with a rack of midi modules. Still functions great. It’s nowhere near the top in sound quality but for a budget hobbyist, it’s great to work with. I do master everything down using a more up to date MacBook Pro running more current software. I just found it interesting that he still has those computers.
Same! I was doing multitrack digital recording on my Mac Performa in 1994! It was so much cleaner than the cassette 4 track. But it could crash and lose HOURS of work in an instant.
Rick, I love how your videos just start and you get right to it. There's so much ado in most videos, including "coming up" peeks and then an intro video - like they're 60 MInutes or something. Keep it up - strong work.
Late 90's I had a Compaq computer with a SoundBlaster sound card and interface, running Cakewalk. I could record up to 6 tracks before exceeding the RAM capabilities.
yeah I remember that card...great note.
Sound blaster and cakewalk are still excellent music making tools to this day , if the system still functions and best of all , OFF LINE.
😅 OMG. 😳 I haven’t thought of SoundBlaster in years! Flashback!
Late 90s? MOTU, and later Pro Tools, were around way before then. But yeah, compute is and will be the issue.
Quantum computing is the real game changer
The latest Bruno Mars single sounds like someone asked AI to write a Bruno Mars song.
* album
Bruno Mars asked Bruno Mars to write a song using AI. Of course I’m joking, but I soon won’t be. It’s inevitable, bro. This is the direction it’s going as quality continues to improve in leaps and bounds. It’s too profitable to fail.
Had the 002 Console Rick 24 yrs back
My dad is still using his 002 rack to extend an Apollo Twin via ADAT. It’s good gear.
Wow. In 2000 I was 9... how awesome it is to have experienced folks like you sharing their views here. Gratitude
Jean-Michel Jarre recorded Oxygène in his kitchen and dining room back in '76. He was ahead of his time in a number of ways. Since Y2K, the music industry caught up to him. Now we're onto the next phase (good or bad as that may be).
JMJ was my gateway into another world of music ... first heard Oxygene on a souped up quad system in a friend's car during a blinding snowstorm ... and never looked back.
@jetfueled2563 When I was in high school, it was already retro. But I used to walk home from work with Oxygène playing on my Walkman. Very entertaining. Who needed drugs when you had that atmospheric 3D-sounding stereo swirling around in your head? I learned a *lot* about how to use ambient mixing and big-ass delay from listening to JMJ.
Watch "Jean-Michel Jarre - Live in Sevilla - ARTE Concert" starting from 55:20. He has not changed. He has always been interested in new technology.
An insane amount of money and energy consumption for something we don't really need.
I dunno. My GF called tech support for a specialized software problem and they couldn't help her after an hour of trying. Then she asked ChatGPT and got it solved in 5 minutes. AI filmmaking is coming at a fraction of the cost, and no Hollywood gatekeepers will stop independent voices. Do we "need" it? I guess not. But we don't "need" A LOT of stuff, depending on your definition. Of course, the real reason the big guys won't go out of business is that AI is a national security issue, and they will be generating their own power. It'll be interesting to see if they generate it more efficiently than ever. They'll be recycling their own cooling water as well.
Agreed! Cheers, Hans! ✌️
Yes, we need it. Any company or country that doesn't develop or adopt AI is going to disappear or be controlled by those that do.
@JDGauchat Until their computers go down, then the old hands will keep working the old way, using their brains instead of expecting a computer to tell them something that's been centrally curated.
@-Thunder Don't forget that most software problems arise because we are being dragged by the nose into doing more and more digitally, which could just as well be done in another way.
I still have my dad's Fostex 4-track from the 80s. Still a dope piece of equipment after all these years.
Is that the one that recorded to cassette tape?
dope means good, right?
@dustbinfilms Absolutely 💯
@darryldouglas6004 Yep! Still have a bunch of old cassette masters too.
Fostex x15?
Rick, you've always entertained. Tonight you INFORMED ....thank you.
Stay tuned for the new Chef Beato channel!
👏
Cocktail; want a bellini, a sidecar, what?
We clearly still love and need you when you're 64, Rick. This is a watershed realization. Good job, Bro.
What does it change?
@pcatful
If you didnt grok that from what Rick said/did, there's no helping u.
@pcatful everything
Problem one with the boss vacation email… you ask your boss vs tell your boss 🤪
Yup. Crude and stupid. Like so many AI fake videos. ..."Sir Winston Church Hill said at the time."
Incredible video! Thank you for this!!
I recall that in the late ‘90s Macromedia had a software tool named Deck II which allowed users to record digital multitrack audio directly into a Mac.
Macromedia ruined Deck II, then Apple bought it and killed it. OSC was the original developer. It was wonderful, especially when Digidesign said their hardware would only do four tracks per card. OSC doubled that count with just their software.
I love that we've both got G3 and G4 powermacs in the backgrounds of our videos 😂
Bet you his doesn't smell as Eucalyptus as yours! 😅
I loved the look and maintainability of the PowerMac G3 towers! So easy to open up and upgrade RAM, hard drives, or peripheral cards. 👍
ok AI generated advert
It was beautiful.
@papalaz4444244 dammit! You figured it out! I’ve still got a pallet of these 25 year old computers that I have to sell and I would have gotten away with it if it weren’t for you nosy kids! 😂
@papalaz4444244 AI generated advert...for a product that does not exist?
lift the latch, open the machine. It was great...until my house got hit by lightning.
What I love about Rick Beato that this episode shows is that I don't feel he's pushing an absolute Right or Wrong on a topic, but he has the skill - and enjoys - holding an issue like this up to the light for people to think about, discuss and debate (as in the many thoughtful comments), without having a personal or financial stake in the issue. So refreshing.
In 1996 I bought a program to record music called Saw Plus. The software cost $800 and was on one floppy disk!
😆 1 disk?! That's kinda hard to believe!
I think most people don't realize that you need a big data center just for training a model. But to run a model, you just need a fairly decent PC; they call this an inference computer.
Yes, and as soon as somebody figures out a way to split the training into multiple small parts, the data center isn't even required anymore. If so, everybody could build custom AI from small parts trained independently. One promising way would be the continuous-thinking approach, where independent small expert models write into a scratchpad that they pass to each other. Look up "HRM 27M model beats GPT"; they used this technique and got really good results with a tiny model.
With a fairly decent PC, you run a model that has a very small context window compared to what you may be used to from ChatGPT or Claude.
@walterdeminicis737 We're looking at the problem from the wrong angle. Artificial intelligence is currently completely useless. It's not me who says this, but the numbers themselves. Could we create encrypted, open-source models that would provide some kind of support for everyone? Maybe.
@leucome Seems blockchain would be ideal for this.
@walterdeminicis737 But when will the average user need such big context windows or such deep "thinking" capabilities? Come on, it's just the CEOs of these companies trying to convince you that you need all of this, but you don't really.
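For scale on the "fairly decent PC" point above, here's a back-of-the-envelope sketch of why inference fits at home: model weights dominate memory use, and quantization shrinks them. The numbers are illustrative, not benchmarks, and the sketch ignores KV-cache overhead, which grows with the context window the replies mention.

```python
# Rough memory-footprint estimate for running an LLM locally.
# Weights take roughly: parameter count x bits per parameter / 8.
# Quantizing from 16-bit floats down to ~4 bits is what lets big
# models fit on consumer machines. This ignores KV-cache and
# runtime overhead, which grow with the context window.

def weight_gb(params_billion, bits_per_param):
    """Approximate weight storage in decimal gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"70B model at {bits}-bit: ~{weight_gb(70, bits):.0f} GB")
```

By this estimate, a 70B model at 4-bit needs on the order of 35 GB for weights, which is why a 128 GB unified-memory Mac like the one in the video can hold one, while 16-bit weights of the same model would not fit.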
I was in the printing business back then, and the same thing happened. Suddenly these imaging businesses started popping up, and we could send them files to output to negatives and separations. Then PDF came out from Adobe, everything moved toward digital, and all those expensive imaging centers disappeared after spending all that money on equipment!
Drum scanners and 4-colour reprographics went in the bin not long after clients started doing the work on their new Mac computers.
good points on the data centers. Keep rockin', Rick!
But if they don't have the data, how do they continue to train the models?
There are already models within the singularity, creating and training themselves (self-improvement). The human element is already shifting to governance and integrity, as these agents don't require our data anymore; they can make it themselves.
And training is much more resource intensive than running the inference models. Perhaps the AI companies will have to rely on sales of trained models (or weights for open source models) rather than actually running the inference engines.
It's not really AI if it's not thinking for itself.
They create synthetic data.
"In modern Large Language Models (LLMs), synthetic data for ongoing Reinforcement Learning (RL) is created through iterative loops where models generate their own training signals, reducing reliance on human annotation. This process typically follows a Self-Improvement or Reinforced Self-Training (ReST) paradigm."
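The ReST paradigm quoted above can be caricatured in a few lines: a "model" generates candidates, a reward function scores them, and only high-reward samples are kept as synthetic training data for the next round. In this toy, the model is random arithmetic and the reward is exact-match correctness; every function here is a stand-in for illustration, not a real training API.

```python
# Toy Reinforced Self-Training (ReST) loop: Grow (sample candidates)
# then Improve (filter by reward, keep as synthetic training data).
# A real setup uses an LLM and a learned reward model; here the
# "model" is random arithmetic and the reward is exact correctness.
import random

random.seed(0)

def generate(n):
    """Grow step: the 'model' samples candidate (question, answer) pairs."""
    samples = []
    for _ in range(n):
        a, b = random.randint(0, 9), random.randint(0, 9)
        guess = a + b + random.choice([0, 0, 0, 1, -1])  # sometimes wrong
        samples.append((f"{a}+{b}", guess))
    return samples

def reward(question, answer):
    """Reward-model stand-in: 1 only if the answer is actually correct."""
    return answer == eval(question)  # eval is fine for this toy arithmetic

def rest_round(n):
    """Improve step: keep only high-reward samples as synthetic data."""
    return [s for s in generate(n) if reward(*s)]

data = rest_round(100)
print(f"kept {len(data)} of 100 synthetic samples for retraining")
```

The kept samples would then be used to fine-tune the model, which is how the loop reduces reliance on human annotation.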
@krusher74 No current AI thinks in any capacity; it's all just a very sophisticated predictive generator. AI researchers like Yann LeCun (a pioneer of the deep-learning methods underlying today's models) have lamented that because LLMs have been so [superficially] impressive, with a huge WOW factor, they will set back true development of "proper" AI, since so many people think "we're almost there!" I can imagine a time, perhaps decades from now, when it just seems silly to say that we had AI in the late 2020s. Metaphorically, I don't even think we're at the point of the Wright Brothers flying that first airplane compared to the jets we have now. It's more like hot air balloons, which kinda look like flying and are impressive as hell if you've never seen a human up in the sky, but ultimately aren't exactly part of flight science and aerodynamics.
Planning a trip is absolutely part of the fun and adventure for me personally
Most people are super lazy. When you realize that it opens many doors for making money if that's your thing.
Using an LLM and creating one capable of compiling all the data needed to make a substantial model, I have to imagine, require very different computing infrastructure and horsepower.
Not really true technically, but we are starting to see the first segmentation where chips are tuned better for inference than training, so it will be a growing thing. Agentic AI and OpenClaw tell us we need >4x the data centers for all the demand headed our way, so Rick just doesn't have the background.
@brianmi40 Keep drinking the Kool-Aid. Who made those projections?
Google has TWO Os in it. I'm not going to spoon feed some insulting M0R0N.
YOU get off your @ss and go figure out WHY Google very near DOUBLED their AI data center construction budget for this year, FAR exceeding even the MOST aggressive analyst predictions.
Or, maybe figure out what the hell SaaS-pocalypse was and why it erased $1T in the stock market overnight.
@brianmi40 Anti-AI hype is the new hype, not to mention wishful thinking. I remember how people thought the internet was a fad in the early 90s.
Figured it out last night, made a video today. Sure seems like the most knowledgeable guy on THIS topic.
Your video helps to confirm that these monster data centers are not intended primarily for citizens. We’re paying for military, government, and surveillance technology. Basically control.
Bingo
Boom goes the dynamite.
Agenda 2030 🧃
@JustinTime1991 Explain it to me.
@BsGameDev No, bot. I won't
What you say makes sense. But there is one thing missing: Someone needs to make the LLMs. They (still) require a lot of data and computing power. (Or am I missing something?)
I had the same thought. The data centers would still be needed, I would think.
Yes, and the LLMs will also need to be updated over time to keep their accuracy. Who takes care of that?
Also, how will it continue learning if it's not connected to the net?
Do they not have enough data centers for training, yet?
@muthahumpa2715 Seriously? Rick's computer was only offline to prove that processing was local.
The models have to be trained, so they are still going to charge you for that. Things are going to change fast, but there are always going to be better options available that you pay for.
The "better" options are FREE. Gemini Flash is 100% free, 4x faster than what Rick showed on mid-range Apple silicon, and has Internet Search for latest info as well as tool calls. No learning/installation/firing it up, just open a browser tab...
Sure, WANT to learn about AI? go for it, run inference at home. But for the silly questions he asked there's zero reasons for the average person to expend all that effort to get a poorer answer after more effort at home than using a frontier 100% free model online.
So, NO, data centers won't go "idle" due to home inference. No one that's seen or has even a tiny clue about Agentic AI or OpenClaw would ever even think such a silly thing and that ignores that NO COMPANY will run their company AI on "employees home computers!!!".
There is a point where you can only reach a certain level of music production and fidelity.
Very interesting. I'll be downloading and trying one of these this week. Thx.
I would say that revolution started in 1993, with Atari 1040ST machines running Emagic Logic Audio with a 4-input audio card. We used to stripe them to the 24-track tape machines for running MIDI stuff and 4 tracks of digital audio trickery.
I remember that process. Ugh.
Putting the MIDI ports in the ST was a real winning move. The Amiga needed an extra box.
I remember paying $250 an hour in the mid-1980's for studio time. Ouch! That's like $1000 an hour now.
You got robbed. Lol.
Right, I recall even $300. It would cost a band thousands just to put a simple demo tape together. Good times, bad times.
Yes, running an LLM (even a big one) on a home computer is feasible (especially if you have a big GPU). But you still won't be able to train your own models on one. Training requires enormous amount of compute power and memory bandwidth. Those massive datacenters are being used to train the LLMs not really run them. For example the GPT-3 model with 175 Billion parameters took 280 GPU years on a $10k GPU (Or 10 days on a cluster of 10,000 GPUs). GPT-4 has 1.8 Trillion parameters.
@RickBeato pin this comment to the top.
Add more context to your video.
Please do something to present full and real information.
There are many great videos on YouTube about LLMs and NPUs; I'm afraid this one is missing a lot of context.
I don't know why people keep repeating this like it's fact. The data centers are NOT primarily for training models, they are for handling requests. Only about 30% is for training, which is still a lot, yes, but it's not the main purpose.
@visionofdisorder And YOU missed my point. Datacenters are required for training. You cannot train large models on your home computer or even your on-prem servers. Sure, fine, datacenters also distribute the load of millions of model queries from users. But each of those are computationally trivial compared to training tasks and can be done anywhere. So I disagree with you, the primary purpose of large datacenters are for tasks that can't be done elsewhere regardless of how much of the compute time is used for those complex tasks.
@visionofdisorder Yes, they are primarily for training, because the big AI companies' business model is not built around online request handling.
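The 280-GPU-year figure quoted a few comments up implies its own cluster arithmetic; here's a quick sanity check using the commenter's numbers (which are their estimates, not official figures):

```python
# Sanity-checking the training-compute claim quoted above:
# 280 GPU-years of work, either on one GPU or spread across
# a 10,000-GPU cluster.
gpu_years = 280
cluster_size = 10_000

# Wall-clock days when the work is split across the cluster.
days = gpu_years * 365 / cluster_size
print(f"{gpu_years} GPU-years on {cluster_size} GPUs ~= {days:.1f} days")

# The same workload on a single home GPU would take 280 years,
# which is why training (unlike inference) stays in data centers.
```

The result lands right on the "10 days on a cluster of 10,000 GPUs" figure in the original comment, so the two numbers are consistent.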
Yeaaah... And like, the stuff he asked, just typing it into Google gives you the answer, no AI sub required... But what if I want to create pics and video from prompts? What if I want the AI I chat with to have a memory big enough to know what we talked about last week? My PC can't pull that off unless I had like a $5k+ PC, right?
This is the most important AND useful video I've ever seen on YouTube.
I have just bought a barebones mp3 player. No wifi, just an SD card with your music and audiobooks. I love the distraction-free listening!
It has YouTube too, apparently. 😂
I bought a cheap Chinese phone with an SD for music. I don't use it for anything else. It's great and works anywhere without ads. I also think it's funny since car makers steal your phone info for sale when you pair it to your car. I don't pair my real phone and they get nothing but an mp3 phone 🤣
@gabibonza No, his phone has YouTube. Or his desktop or laptop.
🙂👍
If an MP3 player has an SD card, it is not barebones!!
This is why they want to kill the personal computer market. They want us to have barebones systems that require us to connect to them to use these models. Watch home computing advancement completely plateau and even start reversing. We’ll stop getting newer powerful hardware and start getting cheaper and cheaper and less powerful machines.
Yes the PC is their Kryptonite
Don't use it. Everything people have to do is: shut it off.
Exactly! And they already started by making RAM unaffordable...
Yes, 100% THIS. E.g., corps are killing PC gaming with overpricing to force everyone onto GeForce NOW datacenters, turning video-card GPUs into an overpriced monthly subscription.
Currently corps are just milking AI, selling zero-progress junk hardware, selling upscaling and fake frames instead of actually improving GPUs, true-FPS performance, and performance per dollar. Nvidia is effectively selling overpriced 5000-series cards to themselves and putting it in financial reports to fake good sales to investors. Because we are very close to actually realistic, movie-level graphics in games: all they have to do is make AI GPUs where graphics are AI-generated from primitive geometry (for cohesion). The GPU of the future is 99%+ AI cores, with the rest being small, weak rasterization cores just fast enough to run primitive graphics. OR just increase VRAM and connect SSD/DDR5-6 directly to the GPU chip for proper no-lag DirectStorage without the long PCIe path = realistic models, textures, effects = realistic graphics. BTW, that's BoltGraphics' approach, but overpriced DRAM/SSDs killed it.
The problem is that a GPU capable of actually realistic, movie-level graphics is the endgame GPU: there's nothing left to improve or sell each year at 10x overprice to noobs after that. It would also make pro cards and servers for movie studios useless, killing Nvidia's and AMD's graphics businesses. And consoles. You'd be able to buy a GPU and use it for 10-20 years or more until it breaks, with no need to upgrade.
Jensen "leather jacket" Huang's solution is to kill PC GPUs/gaming with overpricing altogether and force everybody onto GeForce NOW datacenters. Because you want that sweet better graphics, don't you?
TL;DR: Nvidia won't sell next-gen realistic gaming in discrete GPUs; they'll sell it as an overpriced monthly subscription, because they need ROI on all the datacenters they've built and are building right now.
@JoseLuisOchoaPadilla Actually, they started by making GPUs unaffordable first.
Nvidia wants to sell laggy GeForce NOW subscriptions; AMD wants to sell zero-progress, obsolete, slow console/handheld hardware, extend console cycles, and milk the market with old high-margin junk. AMD produces basically all the relevant consoles and handhelds, so they like PC GPUs being overpriced and don't really mind having no GPU market share as long as they can sell junk for consoles at 10x overprice.
There's a startup called BoltGraphics that threatened to release GPUs with 10x faster ray tracing than a 5090, simply by using way more RAM. Overpricing RAM killed that, not to mention nerfing everything altogether on the PC/smartphone market by making everything low-VRAM/low-RAM junk or overpriced AF.
4:25 SO HOW DO I DOWNLOAD IT???????? 🤔🤔
Search “LM studio”
Now this is music to my ears.
Love your content, Rick! Thank you for putting the hard work into what you do for research.
This is not a slight on Rick Beato, who is clearly a bright guy who's put a lot of thought into this, but I love that I'm getting insightful tech advice from a musician/producer. What a crazy time we live in.
Things are getting hard. You have to adapt as a musician
I've used both self-hosted LLMs and the frontier models for my business as a software architect and system administrator. Unfortunately, the local models require very expensive hardware. Right now it is much cheaper to use paid models than to run good local models. I am hoping that more specialized, smaller models will become available, so more affordable hardware can be used. I don't need a model that can create a chicken dish; I need one that is an expert in the technologies I work in.
Just give it ... time. 😀
Macs do really well for local LLMs because of the unified memory for graphics and cpu.
This is a great real-world insight. The "right now" clause is, I think, the essence, particularly against Rick's key points: that computers become more capable over time, and that privacy and control of personal data have financial value and therefore factor into the business case.
Sure, give it 10-20 years and you can maybe do locally what takes a ton of distributed compute today. But in 10-20 years, those data centers are going to be able to do things with models you can't even imagine.
Correct, and even the frontier models are far from perfect. Big tech is on a race to get models so good that they can improve themselves.
Local models are good enough for some cases, but will be way behind for years to come. Also those labs might stop releasing them, like what is seemingly happening with Qwen from Alibaba
Thank you Rick. There really should be more people/youtubers/influencers like you so this message could be spread out even further.
Recipes? Tourism? Multi stage complex tasks fall apart on these small local models that hallucinate with confidence. Frontier hyperscaler models are the only ones that can handle large multi-step workflows with large context windows.
Exactly - the tourism one wouldn't even have reliable up to date info when he's not connected to the internet.
for now, yes. As agentic modalities evolve and rag tooling becomes more mainstream this is a relatively trivial issue. Just like this didn't happen overnight for the studios it will not happen overnight with AI, and key hyperscalers that build around an ad model will (unfortunately) continue to succeed and outcompete
@iansaunders4877 yeah, for the meantime, people are gonna keep using chatgpt and claude that constantly improve rather than the local ones
Time to talk about Angine de Poitrine…
I agree.
Time for an interview.
Yes, I'm very interested in hearing Rick's take on Angine de Poitrine.
I cannot wait!
@sjeanmacleod😂
i agree !
BTW Rick, watch Angine de Poitrine, if you didn't do it yet. 😂 It's a duo from Quebec, Canada.
Awesome demo and insight. I didn't know I could run these locally.
Meanwhile, everyone now wants physical copies of music.
A lot of fans treat them more as merchandise, like t-shirts, than as a way to listen to music. That has skewed the vinyl market in a not-so-good way for those buying records to spin on their turntables.
Back in the day, everyone was telling me albums were great, and that digital CDs would never take over analog because analog was so warm. That was a good argument for a little while, but eventually, everyone went digital.
Yeah, agree with Ed. Our local record store, at least the one I used to go to every month or so, sold used vinyl, cassettes, and CDs at very reasonable prices, but new releases and reissues continued to rise in price, and it's still that way currently.
"everyone now wants physical copies of music."
Hardly.
Don't be fooled by stories of "a 125% surge in vinyl album sales over the previous year!!!"
20,250 vinyl albums - up from 10,000 last year - is still NOTHING.
Not everyone, just a very vocal minority. Everyone is using streaming services without thinking twice about it.
You're a hero. You did your homework, learned how its gears move, and now you're much more optimistic. I remember commenting on the video about AI music taking over, replying that all that slop only makes real musicians' talent even more valuable and sought after. And here we all are, listening to you.
And subscribing and liking!
You earn it every time!
Cheers!
The law of music: music is born from the emotional experience of life and told through the sounds the artist feels best express and convey that experience.
The technocrats don't care about the real world. They're playing with mud and Play-Doh, making us a new one. It will be sterile, and non-private except for them, eventually turning us into socialists or worse. It will be the great equalizer, where humans as individuals will all be the same and have their fingerprints burned off so no one can identify who we are.
Great video! Thank you!😊
Greetings from ROC NY. This is a really insightful take, Rick. I’m a software engineer who’s been following this situation closely. I also believe that local models are the future, but I never thought of the recording studio analogy. Brilliant. Apple, too, agrees with you, I think. Their upcoming machines are built around this type of workload. They need it to provide AI in a way that protects privacy, but it benefits us in other ways.
Is this why they have been struggling with Siri for so long?
I really hope you’re right and that there’s a damn good excuse for their development of AI to languish so. As this other commenter mentioned, Siri has been seriously neglected so hopefully there’s a good reason.
They will fight hard to ensure you cannot have local inference and are reliant on subscriptions that govt can also spy on.
5090s? gone.
cheap memory? gone.
The apple route might be the saving grace
> Is this why they have been struggling with Siri for so long?
That is my impression, yes. If you think about it, Apple just happening to be less than competent at the technology isn't very convincing. They have the resources and infrastructure to do well. Their value proposition ("your data is more secure with us") means that LLM screw-ups that Meta or Google see as acceptable can't be good enough for them. This is a very difficult technology to tame because of its unpredictability, and once you give it agentive power to do things on your behalf, it becomes downright dangerous.
Apple is using Nvidia GPUs with loads of AI coded into the chip. Apple is not doing anything on their own as far as the AI goes.
Data centers likely won't go out of business. They will be used for medical, astronomical, military, scientific, and logistics applications: large-scale industries and disciplines that require far more computational power than most of our needs on local machines. In addition, VFX imaging and movie/film generation may need more computational power than most local machines can provide.
All of that, except for video, will be DWARFED by Agentic AI / OpenClaw type uses. There's no "likely" necessary; just WON'T. Google didn't mistakenly DOUBLE their budget for new data centers this year because they "failed to call Rick and ask him".
I’ll agree with your assessment when we can download the LLMs on our phone. Most people find it a chore to go to their laptop outside of work hours. But as soon as the phone can handle this locally off its own hard drive, I agree with you.
Thank you sir! You provide a great service
The G3 was the shiznit back in 1999-2000. 0:58
I forgot all about shiznit!! 😂😂😂
@brianmessemer2973 you forgot when shiznit was the shiznit???
Can confirm, it was certainly the aforementioned "shiznit"
You can still stream modern YouTube at 720p on a PowerPC G5. Early-2000s Macs were awesome lol
G3 was to that era as Apple Silicon is now
Instead of AI music, Rick, give us your opinion on Angine de Poitrine. As a proud quebecer, I love those guys and their "out of this world" music ! (Notwithstanding their musicianship !!!)
Bot spam
Still have my Pioneer Reel-to-Reel.
Jealous. I wish today that I had kept mine.
Wow, your best show ever... it's something I've been feeling, that there is a disconnect happening... thnx!
wasn't prepared to have you install LM Studio!! hehe
I see artists like Kalax who have AI-created female singers, as well as music created with AI. Very sad, as we won't have as many human singers, etc.
There are a lot of singers in their 40s. We learned to sing in the 90s.
Chewing gum for the brain, with no nourishment!! Real music, is like real food.... enjoyable, flavourful, sociable, memorable and so satisfying that you go back for more!!😁🇬🇧🇬🇧
@JetCityMatt I don't think you get what they're saying.
It's no surprise you are insightful and intelligent about fields outside of music production. Finally a less bleak outlook on AI
Rick, you have (accidentally, I think) made one of the most subversive, revolutionary videos of the current era. I now realize that our media sort of covered up the full implications of Qwen, DeepSeek, and the like. Yes, they were more energy-efficient at the same tasks, but for the reason that these models could run on small computers, decentralized, not so providers could squeeze more profit out of data centers. This makes the benefits of AI available to all, with full equity, while eliminating all the negatives of AI: enriching tech bros, environment-destroying data centers, handing our liberty over to a techno-feudalist nation-state's full surveillance, and so on. This literally stabs the vampiric heart of these tech-bro companies. Kudos. This is deviously great.