How AI Will Fail Like The Music Industry

  • Published on Apr 14, 2026
  • In this episode I compare the future of AI to the failure of the music industry in the early 2000s.
    Open Source AI Models: huggingface.co/
    I’m running the LLM on a Mac Studio with a 4TB hard drive and 128 GB of RAM.
    My Beato Club supporters:
    Justin Scott
    Terence Mark
    Jason Murray
    Lucienne Kilpatrick
    Alexander Young
    Jason Wagner
    Todd Ladner
    Rob Kline
    Nicholas Long
    Tim Benson
    Leonardo Martins da Costa Rodrigues
    Eddie Perez
    David Solomon
    MICHAEL JOYCE
    Stephen Stubbs
    colin stead
    Jonathan Wentworth-Linton
    Patrick Payne
    MATTHEW KARIS
    Matthew Barouch
    Shaun Samuels
    Danny Kurywchak
    Gregory Reedy
    Sean Coleman
    Alexander Verbitskiy
    CL Turner
    Jason Pappafotis
    John Fulford
    Margaret Carno
    Robert C
    David M Combs
    Eric Flatt
    Reto Spoerli
    Herr Moritz Adam
    Monte St. Johns
    Jon Beezley
    Peter DeVault
    Eric Nabstedt
    Eric Beggs
    Rich Germano
    Brian Bloom
    Peter Pillitteri
    Piush Dahal
    Toby Guidry
  • Music

Comments •

  • @EricDoriean
    @EricDoriean Month ago +593

    I'm glad you emphasised how it was running offline and not connected to the Internet. This bit changes everything.

    • @CalifaAzul
      @CalifaAzul Month ago +5

      Yeah, everybody knows AI will eventually be run locally, but that's still quite a ways away from being practical. The good stuff like video takes really expensive GPUs, and running that locally also takes forever! Maybe in 20 years it will be more feasible, depending on how computers evolve.

    • @bradlyscotunes9156
      @bradlyscotunes9156 Month ago +15

      ​@CalifaAzul
      Note: Beato did it on his Mac from local storage. 😊

    • @dorjedriftwood2731
      @dorjedriftwood2731 Month ago +8

      It really doesn’t; most people are always connected to the internet. It only changes something for the MOST privacy-conscious. Most people will run local models with web search access.

    • @SuperMachead1
      @SuperMachead1 Month ago

      @bradlyscotunes9156 He didn’t say what kind of Mac… was it a $599 NEO or a $6,000 Mac Studio 😂😂😂😂… The fact of the matter is… you can do all this stuff on your phone… if you need that much privacy, buy a very expensive computer… but here’s something nobody is talking about here: all these AI models are from China… so if you’re running them locally, how do you know what’s in the software itself? 😂😂😂… oooopps, I just triggered a bunch of paranoid people 😂😂😂

    • @r2dxhate
      @r2dxhate Month ago +4

      @CalifaAzul 1 year, not 20 years.

  • @g54b95
    @g54b95 Month ago +2648

    7 hours ago, in another timeline, Skynet just added the name *Rick Beato* to the list.

  • @HenryFondlelini
    @HenryFondlelini Month ago +2180

    I never thought Rick Beato teaching me how to install a local AI on my computer was going to be on my 2026 bingo card

    • @BillPeschel
      @BillPeschel Month ago +21

      Wow. You weren't kidding.

    • @UnchainedEruption
      @UnchainedEruption Month ago +6

      Family Guy made fun of that bingo card cliche on a recent episode.

    • @emircanerkul
      @emircanerkul Month ago +2

      yea :D but why does he still think it will make us fail? As a software dev, I can use it while doing my job and it speeds things up a lot. It's a fact I observe, and I don't have any reason to believe some old guy with 5M subs

    • @havokca
      @havokca Month ago +1

      ... or Rick Beato getting up on the privacy advocacy soapbox

    • @eletricavenue
      @eletricavenue Month ago +17

      Are you certain that’s really Rick? 😂

  • @J_Next
    @J_Next Month ago +243

    The fact that OpenAI has a privacy and security team of humans that review the prompts of users that are flagged tells you that your data is not at all private.

    • @healthspiracyofficial
      @healthspiracyofficial Month ago +7

      Flagged prompts reviewed by a safety team do not mean all user data lacks privacy. Automated systems flag a small number of conversations for abuse, security, or safety checks. Human review is limited to those cases and done under strict access controls. Many online services use similar oversight to prevent misuse. Privacy controls also exist, such as turning off “Chat History and Training,” which prevents conversations from being used to train models. Human review for safety does not mean all prompts are openly read or that personal data is broadly exposed.

    • @vitorgeraldes7054
      @vitorgeraldes7054 Month ago +8

      Those guys from OpenAI are worse than Big Brother :) Long Live Rick!

    • @manuel-xax
      @manuel-xax 29 days ago +9

      OpenAI... even the name is a lie!

    • @sladewilson8241
      @sladewilson8241 28 days ago +1

      @healthspiracyofficial Of course it doesn't lack privacy. I'm pretty sure all prompts and user profiles are sold to Palantir and ad companies

    • @SaturnaIslandLive
      @SaturnaIslandLive 24 days ago

      In British Columbia, Canada, we had a psychotic person carry out a mass shooting at one of our schools, killing many people. They used ChatGPT to help plan everything. It set off alarm bells at ChatGPT, which they were alerted to, but didn't bother alerting anyone. They may have been able to prevent the loss of those lives.

  • @ovivan79
    @ovivan79 Month ago +1361

    I didn’t know running these locally was this accessible already. Thank you Rick!

    • @ryanchappell5962
      @ryanchappell5962 Month ago +18

      He probably has a badass Mac Studio or something, but yeah, it works! Apple was smart to focus on this kind of architecture; I agree they will win big. The other thing is that you don't even need to pay Apple; you can get a $20/month subscription and it's fine. Most of these AI companies are not profitable at all. It's so competitive that they may never be. Free local models just make that even harder.

    • @medonk12rs
      @medonk12rs Month ago

      @ryanchappell5962 You can run LM Studio on any 16GB Mac. I do on a 2021 M1 MacBook Air with 16GB RAM. Cheers!

    • @lcvscv
      @lcvscv Month ago +5

      Yes, but you can only do simple things... such as a recipe... a ten-year business plan with a more complex scenario can only be done with much more computing power.

    • @ElementaryWatson_fafo
      @ElementaryWatson_fafo Month ago +106

      I'm running several models locally, using llama.cpp and ollama. That's not a problem; it has been available for quite a while. The problem is that you can't run even modestly large models. The best so far that fits a 16GB NVIDIA card is a highly optimized OpenAI 20-billion-parameter model at ~4.25-bit quantization. It leaves very little memory for the context window. It's tiny by today's standards, runs at just ~20 tokens per second, and it's not great for reasoning.
      But there is another, even more important reason why you need datacenters: training models. That requires all the performance you can get, and even then, training large models takes days or weeks non-stop, during which dozens of GPUs will fail. You can't even start training on a single GPU at home.
      Comparing the performance of a single GPU in your home computer to a modern datacenter is like comparing the strength of an ant to that of an elephant.
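The memory squeeze described in that comment is simple back-of-the-envelope arithmetic. A minimal sketch, using the 20-billion-parameter model, ~4.25 bits per weight, and 16 GB card figures from the comment (the KV-cache cost itself is omitted; whatever the weights leave over is all the context window gets):

```python
# Rough VRAM budget for a quantized model on a consumer GPU.
params = 20e9                      # parameter count (20B)
bits_per_weight = 4.25             # quantized precision
card_gb = 16.0                     # GPU memory on the card

weights_gb = params * bits_per_weight / 8 / 1e9   # bits -> bytes -> GB
leftover_gb = card_gb - weights_gb                # left for KV cache / context

print(f"weights:  {weights_gb:.2f} GB")   # ~10.6 GB just to hold the weights
print(f"leftover: {leftover_gb:.2f} GB")  # ~5.4 GB for everything else
```

Since the KV cache grows linearly with sequence length, those leftover gigabytes disappear quickly as the context window gets longer.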

    • @christophersteen1873
      @christophersteen1873 Month ago

      @ElementaryWatson_fafo But you do not need so many for the average use case; this is the exact point I think you missed. The things you discuss are niche. Not everyone needs a giant full-featured LLM, only researchers and Fortune 50 companies do; everyone else can use a local instance. If I had an elephant it would bankrupt me; I only need an ant to do my recipes and emails. It is funny how most talks about AI dovetail right into white elephant allegories so easily.

  • @GiI11
    @GiI11 Month ago +338

    Getting most people to understand that LLMs are much broader than ChatGPT is the single biggest step.

    • @TapTwoCounterspell
      @TapTwoCounterspell Month ago +3

      Really? The SINGLE. BIGGEST. STEP. You sure about that...

    • @Frank-sirisO
      @Frank-sirisO Month ago +27

      Getting people to understand that LLMs suck ass is far more important.

    • @Frank-sirisO
      @Frank-sirisO Month ago +15

      First clue: they don't learn, and prior to that, they can't comprehend.
      Easy enough to understand.

    • @Flush_You_Basterd
      @Flush_You_Basterd Month ago +2

      @TapTwoCounterspell I read this in Tim Robinson’s voice.

    • @ThinkerMan-v6g
      @ThinkerMan-v6g Month ago +4

      True, but ChatGPT, Gemini, and a few others are more than enough.

  • @qwerty69-i7m
    @qwerty69-i7m Month ago +384

    I really hope so. I spent 68 minutes this morning on hold to my insurers. I eventually "spoke" to a machine that couldn't help, and it suggested I call the number I had just called them on.

    • @lloydcolemacromediaflash
      @lloydcolemacromediaflash Month ago +10

      @Robert-d5b1o Apparently 'AI' killed 180 schoolgirls in Iran. The people who built the missiles, transported them thousands of miles to the other side of the world, the people who hooked them up to a computer: all of them were 'blameless'. It was apparently a software program on a computer that caused the war crime.

    • @fredbloggs6080
      @fredbloggs6080 Month ago +22

      @Robert-d5b1o But the automated voice tells you right upfront that your call is very important to them. Are you suggesting that's not true?

    • @markc8401
      @markc8401 Month ago

      @fredbloggs6080 As I always say, if my call was important you would fucking answer it

    • @onetwothreefour-s1n
      @onetwothreefour-s1n Month ago +3

      😂😆

    • @stevekramer6573
      @stevekramer6573 Month ago +1

      HELL YA’ 🙏🏻 AMEIN
      cellist, steve kramer 🎻

  • @davesmith9217
    @davesmith9217 Month ago +170

    Rick, good video, but one clarification: some of the data centers being built are to train these LLMs. The LLMs you download to your computer are already trained. So while I agree we will not need large data centers to house trained LLMs, there is no way to "train" an LLM on your local machine, because it requires huge amounts of data that are not available on someone's personal PC.

    • @danielclark1314
      @danielclark1314 Month ago +8

      On that note, I wonder if we'll start seeing computers come with an LLM already installed (with the option of 'expanding' it for a fee), and then the AI on YOUR computer gets trained on YOU. No need for an LLM trained by millions of people; just you, the ultimate in catered responses.

    • @myguitarsandme
      @myguitarsandme Month ago +1

      Lol! I used the same icon for my profile. So when I saw your post, I couldn't remember writing, but our writing styles are very similar. It took me a minute to realise what had happened. Also, you are correct!

    • @rgrandles
      @rgrandles Month ago +1

      Yeh but the model is local and the data gets compressed. It will happen

    • @maplenerd22
      @maplenerd22 Month ago +4

      @rgrandles Compression is not the issue. Compression is only good for storage or transmission. Training and processing the data is the issue. You don't process compressed data; all data has to be decompressed to be processed.

    • @simonkerr7422
      @simonkerr7422 Month ago +12

      Offline LLMs can no doubt create a recipe, but if they remain isolated from the WWW, their advice will become dated; try getting the best flight to London to visit the museums. Great video and enlightening, but I think the story of how this works is not yet finished.

  • @krishnanpillaipakkamnatt7483

    Never thought I'd see this: Rick Beato - the privacy advocate! Bravo for bringing these concepts to an audience that's direly in need of guidance.

    • @dinochris2136
      @dinochris2136 Month ago +2

      Sure, me neither, but he is advising completely the wrong thing

    • @JayJams-i7k2p
      @JayJams-i7k2p Month ago

      @dinochris2136 why

    • @binglebonglebellybarrelblast
      @binglebonglebellybarrelblast 29 days ago

      @dinochris2136 elaborate

    • @f4ust85
      @f4ust85 27 days ago +1

      "Privacy" in the sense that everybody else's scraped, stolen data in some Chinese model is great to have and use, but uploading your own searches is of course unacceptable. Typical...

  • @vwestlife
    @vwestlife Month ago +773

    Attorneys are now putting into their contracts a notice telling their clients not to divulge anything relating to their case to an online AI chatbot, because AI doesn't have attorney-client privilege, and AI prompts are discoverable.

    • @cooldebt
      @cooldebt Month ago +8

      Same reason many lawyers are not using AI that is accessible to the general public to write advice or draft documents.

    • @alexk3088
      @alexk3088 Month ago +31

      @cooldebt So they're using paid private versions. Law firms are absolutely using AI. It's like having an army of paralegals, and it will change the paradigm of why a big firm was "muscle". It was never about prodigy lawyers; it's always been about resources, i.e., money.
      If a law firm uses a non-public AI, none of that will be discoverable, any more than the individual lawyers' PCs or file cabinets.

    • @MichaelFaughn
      @MichaelFaughn Month ago

      @alexk3088 Same with the federal government: they're either using on-prem models or they've contracted with frontier-model providers so that the providers don't do any extraneous data retention or use the data for training their models. On the other hand, the federal government is also now getting hit with FOIA requests for information about the prompts they're using with AI.

    • @randallshutt2907
      @randallshutt2907 Month ago +2

      I wonder if, if one were a pro se litigant, would the courts extend "attorney-client privilege," which to me would be no requesting conversations with oneself, which in a 3rd order mutation I would suggest that my searching the internet via AI is essentially me researching LexisNexis or even me asking a paralegal to look it up?
      edit: I'm really half trolling, but you never know. That said, pro se litigants simply shouldn't be doing the pro se thing, in most cases.

  • @StephenHammThereminman
    @StephenHammThereminman Month ago +264

    I was a drafting student in 1984. Up until then, hundreds of draftsmen were hired by companies to hand-draw blueprints. That year, computer-aided drafting came online. The school bought a huge mainframe computer the size of a refrigerator to run a bunch of terminals, because that's how it was done at the time. It cost them millions of dollars. In 1985, Apple released a drafting program for Macs. The same year, that mainframe became a boat anchor.

    • @stirzjuststirz5077
      @stirzjuststirz5077 Month ago +26

      You may appreciate this. I graduated as an Engineer in '85. At that time, the University was using an IBM mainframe with punch card input: One line of FORTRAN code per card, inputted into a card reader, then compiled to the mainframe. The first firm I worked for was still doing drawings with pen and ink on mylar. One of the guys could hand letter indistinguishable from LEROY lettering - he was that good. Around 86 or 87 we got a 386 PC and early versions of AutoCAD and within a few years, hand drafting became an obsolete skill - which was too bad IMO, as those guys could produce drawings that were far better in appearance than the CAD drawings at the time. (And I bet you know what Pounce, Scum-X, and LEROY were).

    • @andyharpist2938
      @andyharpist2938 Month ago +12

      I will always remember that drafting programme which entailed two people spending 20 minutes to draw an outline of a square house.

    • @andyharpist2938
      @andyharpist2938 Month ago +15

      @stirzjuststirz5077 Modern cad drawings are still utter stupid, complex, crude rubbish. As informative as text-speak.

    • @monicapincombe5282
      @monicapincombe5282 Month ago +10

      Lol, I was taking old time drafting courses at the local vo tech in 1984. I asked, what are those 2 guys doing in that little room adjoining our classroom? The teacher said they were learning CAD (computer aided drafting). I asked if I should be learning that. The teacher said, well it costs an extra $60 an hour to even be in that room. 😂
      Meanwhile at work, our middle aged draftsman was being forced to learn CAD. Every time I walked by his office, he was either swearing or throwing something. 🤣🤣🤣

    • @j.dragon651
      @j.dragon651 Month ago +4

      I was a machinist and CNC programmer. When I started out there were no "computers" or CNC machines. Once computers came on the scene, I had to take your drawings and redraw them on the computer before I could program the machines. Ten or so years later, the prints were gone and solids were taking over, handed from engineering to CAM to program from. Copying, actually redrawing from scratch, your drawings into the computer software (Mastercam, Datacut, Gibbs, whatever) used to take a lot of time and was one of my favorite tasks. Plotting toolpaths old-style on a computer was always an adventure.

  • @neorvo5599
    @neorvo5599 24 days ago

    I used to travel in time, blowing my old mind when Beato shares real truth histories like this. The message is powerful, my soul is a poor passenger.

  • @Nikkon
    @Nikkon Month ago +126

    As a software engineer and a musician, I did not expect Rick to talk about Hugging Face and LM Studio haha. Very cool tho!

  • @tribe-jam2112
    @tribe-jam2112 Month ago +119

    Maybe we plug in all of Spotify into the data centers, start a gnarly unstoppable feedback loop and watch it all explode.

  • @Rasenschneider
    @Rasenschneider Month ago +102

    There are different types of data centers: those for training LLMs and those where you run (serve) those LLMs.

    • @taicunmusic
      @taicunmusic Month ago +52

      I love Rick's videos, but in this video, he has no idea of what he's talking about

    • @scottlatham9437
      @scottlatham9437 Month ago +19

      @taicunmusic It's a good example of how a little bit of knowledge is dangerous. But he is not completely wrong. Not all of the big players will survive in the end, but there will always be a need for these large AI farms. Running a heavily quantized LLM falls short once you move past asking for a recipe. There are semi-useful, home-assistant-level models right now. You'd need $40,000 worth of Mac Studios to run a half-decent LLM that comes close to frontier models at anything. But that's pretty cheap compared to the billions it takes to make the model.

    • @hajosmulders
      @hajosmulders Month ago +10

      I really wish this comment was higher. The Llama 4 LLM I mess with on my Android was trained on a really big cluster to make the model portable. On a side note, I think that same extreme distinction between training and the working model is THE AI Achilles' heel. We don't seem to have that in biology.

    • @growlith6969
      @growlith6969 Month ago +3

      And the ones where every last thing you do is screenshotted, saved, compiled, and used against you.

    • @alejandromagana1554
      @alejandromagana1554 Month ago

      @scottlatham9437 exactly this ☝🏻

  • @Hopyboby
    @Hopyboby 23 days ago

    Video is 60% ads and Rick still pockets 50k likes. That's why I respect this man.

  • @tekperson
    @tekperson Month ago +120

    I think you’re right to a point. Most of us do not need foundation models to do typical AI work. However, some tasks do require models that are more capable than what you can realistically run at home. I also agree that the gold rush to create AI companies may be a bubble that eventually bursts, since the market likely cannot support the number of companies that exist right now. But we are early in the technology development, so the crystal ball is a little cloudy right now.

    • @InDemocracyWeTrust
      @InDemocracyWeTrust Month ago +19

      Yeah I pay for Claude pro because local LLMs aren't quite to Opus 4.6 level (reasoning/speed) yet (at least on my mac mini). It's quite possible that these data centers will eventually be specialized hubs for medical research, military, or government operations. The government isn't asking LLMs for recipes.

    • @brianmi40
      @brianmi40 Month ago +3

      Then you have no clue about Agentic AI or OpenClaw and what those mean to the future.

    • @rickyspanish4792
      @rickyspanish4792 Month ago +2

      @brianmi40 doubtful, they will remain a massive security risk for quite some time

    • @brianmi40
      @brianmi40 Month ago

      Not "massive" by any means. OpenClaw has already reviewed the existing skills and has partnered with VirusTotal for skill security after industry giant Cisco built Skill Scanner.
      Picking apart the code for a sleeper skill is almost trivial already. Sooner or later you have to run the code embedded in the skill and you can see the errant calls or data transmission.
      We have a dozen or more competitors (known, could be hundreds laboring in silence), all with varying efforts to address any and all security risks. The race is on.
      If you KNOW what it's capable of, you will have no issue grasping the level of effort that is already going on behind the scenes to perfect it, make it stupidly easy for all and make it safe for the masses.
      We now have AI able to outperform humans in testing code for flaws and exploits. There's NO QUESTION that this ability will be pointed at all skill creation, sooner rather than later, if many aren't already in a testing/improving cycle.
      The first person to bring to market this ability both safely and capably for the masses will start a unicorn company. The race is on and the user base for all companies providing it will blow right through 10 million users overnight to and beyond 100 million users.

    • @boardskins
      @boardskins Month ago +1

      @rickyspanish4792 And when has that stopped anyone who wants wealth?

  • @boboharperoldbobostillhere7588

    Napster didn't kill the profit model; it was the digitizing of music, the ubiquitous MP3 format, and the internet that did. Once people could trade .mp3 files all over the place, no one needed to buy music anymore. Napster just took advantage of the tech that changed it. It was also things like the incredible amounts of storage available on flash drives and smartphones, which allowed users to download a ton of MP3 music and take it with them. As an electrical engineer in the early '80s, we'd sit around and talk about the future of music and video. CDs were just starting to come out, and we used to say the only thing keeping "record" sales afloat was the lack of cheap, easy, huge amounts of storage to keep your digitized content on. It didn't take long for Moore's Law to make that happen.

    • @dans5033
      @dans5033 Month ago +4

      Napster absolutely killed the profit model. Without a widespread, decentralized platform to share music p2p, adopting mp3 files (or any other compression format) as your primary way to consume and share music would not have exploded the way it did, and adoption would not have reached critical mass fast enough to overturn the industry: smaller groups of 'pirates' or 'bootleggers' would have been aggressively targeted with litigation, and niche hardware and software would have been hamstrung by lawsuits. CD-R drives had a massively larger hand in destroying the industry than a single compression codec, but again, they would never have been used to the extent they were for music without Napster. Smartphones had literally nothing to do with the switch to digital; they didn't appear until years after digital music files became the primary way people listened to music. iPods were massive until smartphones killed them, and the iPod was only created in response to the trend.
      Music being able to be digitized and easily stored/shared en masse certainly paved the way for the value of copies of recorded music to become nothing, but Napster was the tsunami that introduced it to the masses and created a market and demand for software and hardware support that the industry couldn't possibly fight.

    • @OnesecoMedia
      @OnesecoMedia 28 days ago

      CD-RW and I are offended by the lack of respect

    • @mikelenox7999
      @mikelenox7999 20 days ago

      I used to dream back in school in the 80's of being able to listen to my home music collection on my walkman remotely.

    • @robscopeland
      @robscopeland 12 days ago

      @dans5033 Napster exposed that a majority of albums released on CD only had ~1 to 2 good songs and the rest was filler.

  • @raytsh
    @raytsh Month ago +840

    Running an already trained model locally is something else than training a new model locally. Training is what requires such a huge amount of processing power.

    • @visionofdisorder
      @visionofdisorder Month ago +25

      No it doesn't. Data centers are mostly for handling everyone's requests. Only partly for training.

    • @Adunc67
      @Adunc67 Month ago +10

      Big Chinese companies do the training and open-source the weights

    • @shanescott8241
      @shanescott8241 Month ago +8

      Okay, but haven't we already hit diminishing returns with chat bots? My dinners aren't that extravagant

    • @Adunc67
      @Adunc67 Month ago

      @shanescott8241 lol, but actually not really. Ever since o1 (one of the first LLM thinking models) was released at the end of 2024, big tech has been scaling inference hard. On ARC-AGI-1, one of the hardest AGI benchmarks, we've gone from GPT 4.5 achieving only 10% to GPT 5.4 achieving 94.5%. GPT 4.5 was released at the start of last year, GPT 5.4 like a week ago...

    • @victorcanesin8978
      @victorcanesin8978 Month ago +37

      I agree. In Zhenya Ji et al., 2025, the authors highlight that, although training is more energy-intensive and harder to execute in a distributed way, different estimates say it accounts for only 40% (Google), 35% (Meta), or even 10-20% (NVIDIA, AWS) of total energy consumption (the rest being used for inference).
      So use local models!
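The training/inference split being debated in this thread can be put in rough numbers with the standard scaling rule of thumb (training ≈ 6·N·D FLOPs for N parameters and D tokens; inference ≈ 2·N FLOPs per generated token). The model size and corpus size below are illustrative assumptions, not figures from the video:

```python
# Back-of-the-envelope: why training needs a datacenter but serving may not.
n_params = 20e9          # assumed model size: 20B parameters
train_tokens = 1e12      # assumed pretraining corpus: 1T tokens

train_flops = 6 * n_params * train_tokens   # one full training run (~6*N*D)
infer_flops = 2 * n_params                  # one generated token (~2*N)

# How many served tokens cost as much compute as the single training run:
tokens_equivalent = train_flops / infer_flops

print(f"training run:      {train_flops:.1e} FLOPs")
print(f"per token served:  {infer_flops:.1e} FLOPs")
print(f"break-even tokens: {tokens_equivalent:.1e}")
```

Under these assumptions, one pretraining run equals a few trillion served tokens of compute, which is consistent with the thread's point: which side dominates total energy depends on how much inference traffic a deployed model actually sees.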

  • @ITheFight
    @ITheFight Month ago +5

    I'm watching this and got an AI ad lol

  • @PowerTree-007
    @PowerTree-007 Month ago +341

    Heaven for me was a TEAC 4 channel reel to reel tape recorder.

    • @andrewelliott4436
      @andrewelliott4436 Month ago +3

      Nagra 4.2 + BMT3 Mixer and X - Tal synch + QSLI Pilot Playback.

    • @AmplifierDotCom
      @AmplifierDotCom Month ago +26

      Right, or the Portastudio 4-track was magical enough

    • @brianmi40
      @brianmi40 Month ago +9

      First was a 4 track Portastudio on cassette for me. Running Cakewalk DOS for MIDI and SMPTE sync!

    • @AmplifierDotCom
      @AmplifierDotCom Month ago +2

      Anyone else rock the Akai MG 12 track combo

    • @mlaprarie
      @mlaprarie Month ago +1

      I upgraded from a Teac A2340 to an Otari MX5050 8 track. Then sold both machines and got a SEk'D ARC 88 A/D converter card that let me record 8 tracks at 24/96 on my hard drive and mix in Samplitude Studio. My wife was extremely grateful, LOL.

  • @fillmore999
    @fillmore999 28 days ago +3

    Rick showed me this and Angine de Poitrine on the same day. Life changing.

  • @BookishNerDan
    @BookishNerDan Month ago +1036

    If someone can't write a simple email to their boss explaining why they won't be at work, then there is no hope left for our species.

    • @chrishelbling3879
      @chrishelbling3879 Month ago +65

      ...the reply comes back from the boss' AI. The 2 people have probably never met.

    • @RetroPixelDen
      @RetroPixelDen Month ago +11

      It is not just about not being able, it is about saving time.

    • @althejazzman
      @althejazzman Month ago +27

      @RetroPixelDen You will stop being able because you'll rely on this time saving measure constantly.

    • @DougieDale
      @DougieDale Month ago +3

      Should phone in, like when they sack U,😮

    • @PeteQuad
      @PeteQuad Month ago +1

      ​@althejazzman True for some people but not all.

  • @matthewrichardharris
    @matthewrichardharris Month ago +1

    The Digi 001 changed my life, threw me into the world of Protools in 2001 and I never looked back!

  • @0laughlines0
    @0laughlines0 Month ago +17

    Is there an Ai agent that can get a home printer to print? That's the one I'm investing in.

    • @dennisbarrington221
      @dennisbarrington221 26 days ago

      This is also a winning topic for the interview question, "What did you have the most trouble with at your last job?"

    • @sfmusicscene1249
      @sfmusicscene1249 25 days ago

      Step 1) buy a Brother printer (i.e., DO NOT buy any HP printer), Step 2) make sure you have the latest drivers. That's it!

  • @10akee
    @10akee Month ago +21

    0:11 I remember the Digi 001. Game changer. A few years later (still in the mid-2000s), I got the Digi-002, Mac G5, Avalon Mic Pre, and a U87 for my home studio. I was one of many that found that gear combo pretty much killed recording studios for tracking vocals.

  • @russt4882
    @russt4882 Month ago +5

    Local models are great and I love LM Studio, especially paired with Docker Desktop full of MCP tools. For a chatbot at home, maybe it's all you need, and the privacy points are real. What you're missing is that the capability of these models is distilled from much larger models that can only be trained in a datacenter. Qwen is also Chinese, so keep that in mind. From a tech person's perspective, this sounds like an old doctor looking at X-rays in the '70s and saying it's all the imaging we need, and computed tomography and magnetic resonance are just buzzwords. The applications for the AI created in these datacenters are just starting to emerge, and they go far beyond chatbots.
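For anyone curious what talking to LM Studio from code looks like, it exposes an OpenAI-compatible HTTP server on localhost. A minimal sketch, assuming the default port 1234; the model name is a placeholder for whatever you have downloaded, and the request itself is left commented out so nothing runs until the local server is started:

```python
import json
from urllib import request

# Build a chat request against LM Studio's local OpenAI-compatible endpoint.
# Nothing here touches the internet: the server lives on your own machine.
payload = {
    "model": "qwen2.5-7b-instruct",  # placeholder: use a model you've downloaded
    "messages": [
        {"role": "user", "content": "Draft a short email saying I'll be out sick."}
    ],
    "temperature": 0.7,
}
req = request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once LM Studio's local server is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint mimics the OpenAI API shape, most OpenAI client libraries can also be pointed at it by overriding the base URL.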

  • @Altek1
    @Altek1 13 days ago

    It's rare to see an older gentleman understand this in such detail. You're a smart man and I'm glad you're here to teach the masses.

  • @LuciFer-c1c7o
    @LuciFer-c1c7o Month ago +34

    The problem with your theory is that the data centers aren't being built to provide recipes. The AI that is, and will be, used in medicine and the military will require computing power far beyond your laptop at home.

    • @ckatheman
      @ckatheman Month ago +14

      LLMs do not scale. Once the compression was released (which is how it "advanced" so quickly), it's 100% dependent on the injection of new, novel, human-created material to generate novel content. No amount of CPU can overcome that. If too many people fail to produce new content because they are reliant on these tools to function, the model will eat itself, as will our society. Fortunately it's all hot garbage, and people are figuring that out quickly

    • @whateverwhenever8170
      @whateverwhenever8170 Month ago +5

      As someone who administered massive arrays of machines, the demand will always exist for the facilities, my guess is law firms will be the next to dump massive cash into this. The multiplier is enormous.

    • @fallenshallrise
      @fallenshallrise Month ago +7

      @ckatheman This. The models are just ingesting original works and recombining it. They are already training off of AI created content and are starting the process of recursively eating themselves alive.

    • @joeyoungs8426
      @joeyoungs8426 Month ago +3

      @whateverwhenever8170 Absolutely true. I administer a large auto OEM's HPC and I believe they will eventually do the work this cluster does off-prem. Our last build-out earlier this year is likely the last time they'll drop $10M on a cluster. When this cluster is fully depreciated in about five years, it will probably be more cost-effective to do their aero, CFD and crash analysis off-prem.

    • @brianmi40
      @brianmi40 Month ago +2

      Partially correct: yes, those, but more so Agentic AI, OpenClaw, etc. Agentic AI use will DWARF all chat, making it a ROUNDING ERROR in tokens consumed. One tester has burned through a BILLION tokens with his OpenClaw. That would take 7 MONTHS on Rick's computer...

  • @IanSwope
    @IanSwope Month ago +111

    Would love to see you have Trent Reznor on for this discussion as it would be really insightful.

    • @danacoleman4007
      @danacoleman4007 Month ago +2

      as long as he doesn't "sing" 😂😂😂

    • @ThomasJLarsen
      @ThomasJLarsen Month ago +5

      Rick Beato has made me the invisible man, because I have written about how music died in the early 1980s, when producers dropped composers, arrangers and musicians and then did it all themselves.
      However, I believe I am visible if I reply to other people's comments. Could you please acknowledge that you can see this comment? I am curious whether it is visible. Thank you.

    • @friarkhan
      @friarkhan Month ago +4

      @ThomasJLarsen You wrote about your issue in a way that made me think you were delusional at first, haha! Is Rick Beato a mad scientist running experiments on people whose comments he dislikes? Do you have to wrap bandages all over your face so that you can be visible in public? 😆
      ...But your comment was visible enough that I was able to reread it and realize he just has blocked your non-reply comments such that they're invisible.

    • @Naindurth
      @Naindurth Month ago +3

      @ThomasJLarsen sadly I can't see you.. but I can read what you write!!! yaaayyyy

    • @writerteacher1
      @writerteacher1 Month ago +1

      @ThomasJLarsen HG Wells, is that you?

  • @Igbon5
    @Igbon5 Month ago +70

    Rick Beato as an AI ambassador, very interesting.
    You can run these models at home; you certainly cannot create and train them. That still needs the giant room-sized supercomputer.

    • @pootzmagootz
      @pootzmagootz Month ago +9

      You can absolutely train them at home. Obviously you need a strong computer, but people train them at home all the time. They're not nearly as large as the big AI companies' models, but 32GB models are about the limit of some home-trained models I've seen (for image generation).

    • @fvotava
      @fvotava Month ago +4

      In the same way, it would be hard for someone to develop, at home, an interface connecting a guitar to a computer. But since someone has already developed one, you can use it at home. It's the same thing.

    • @robmacl7
      @robmacl7 Month ago +1

      Last I heard, OpenAI is spending about 3/4 of its compute on training. Eventually most compute will be for inference (user tasks), and a lot of that will be local.

    • @NoidoDev
      @NoidoDev Month ago +6

      Fine-tuning is training, just not from scratch, and you can do that with small enough models. This has been the case for at least a few years. Of course it's very expensive with bigger models. Most people just don't know anything about AI, despite the information being freely available.
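
As a rough illustration of why fine-tuning fits on home hardware where from-scratch training doesn't: adapter methods such as LoRA freeze the base weight matrix and train only a low-rank update, so the trainable-parameter count collapses. A back-of-envelope sketch (the hidden size and rank below are illustrative numbers, not taken from any particular model):

```python
# LoRA-style parameter accounting: the frozen weight W is d_out x d_in;
# training learns only B (d_out x r) and A (r x d_in), with W' = W + B @ A.
def full_update_params(d_out: int, d_in: int) -> int:
    return d_out * d_in                      # every entry of W is trainable

def lora_update_params(d_out: int, d_in: int, rank: int) -> int:
    return d_out * rank + rank * d_in        # only B and A are trainable

d = 4096                                     # a typical hidden size
full = full_update_params(d, d)              # 16_777_216
lora = lora_update_params(d, d, rank=8)      # 65_536
print(f"trainable params: full={full:,} lora={lora:,} "
      f"({full // lora}x fewer)")
```

That 256x reduction per weight matrix (at rank 8) is why adapter fine-tuning of a small model fits in consumer GPU memory while full pre-training does not.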

    • @MarkMatthewsNJ
      @MarkMatthewsNJ Month ago +5

      my analogy for Rick is... it's like saying "I bought a guitar, so now I can play every song known to man". No, it doesn't work that way.

  • @slobodanudarac5
    @slobodanudarac5 23 days ago

    Thx a million, Rick! Love you ❤

  • @HEISENBERG-EVIL
    @HEISENBERG-EVIL 25 days ago +6

    You had me at Digi 001 / PowerMac G3 in Frutiger Aero / Liquid Glass design 😂❤

  • @InDemocracyWeTrust
    @InDemocracyWeTrust Month ago +152

    Data centers will be used for massive-scale industries (medical research, military logistics, or high-end film VFX) that require more power than a local machine can provide.

    • @RAEckart22
      @RAEckart22 Month ago +13

      True, music did not get more complex. But science/military/medical/VFX do.

    • @jay.dee.g.97
      @jay.dee.g.97 Month ago +10

      and mostly algorithmic surveillance and fingerprinting people

    • @brianmi40
      @brianmi40 Month ago +3

      To say nothing of basic corporate uses. Wendy's AI to take your order won't be running on a home PC...
      And that "massive scale" that is coming, is coming like a FREIGHT TRAIN: Agentic AI, OpenClaw and everything they suggest for the future. Google didn't DOUBLE their data center construction budget this year for no reason.

    • @mgoboski
      @mgoboski Month ago

      misused*
      Fixed it for ya

    • @jamesallen176
      @jamesallen176 Month ago +2

      @brianmi40 - OpenClaw already works with locally hosted models.

  • @DocFlay
    @DocFlay Month ago +49

    We were using live capture and digital processing on Amigas and Ataris in the 80s for a lot less.

    • @Coxtoasten12
      @Coxtoasten12 Month ago +8

      I had an Amiga 2000 running Deluxe Paint. I thought at the time it was the greatest thing ever. Making vids on Paint and getting them on VHS.

    • @michaelwallace4298
      @michaelwallace4298 Month ago +4

      True, I remember my Ataris fondly, but recording was pretty much solely a MIDI interface linked to external recording. There was not enough RAM or hard drive space for serious recording of analogue audio. RAM was maxed at 4 MB, and it was expensive. The hard drive on the studio computer (the Atari Mega with 4 MB of memory) was 40 megabytes. Huge at the time for a home PC. That said, linked in with the Akai 1214 desk and 12-track recorder, it was more than workable for a home studio. I still use Cubase!

    • @althejazzman
      @althejazzman Month ago

      @michaelwallace4298 I had an audio capture card for an Atari ST with a wav editor. Processing anything normally warned you that it might take several days!

    • @hepphepps8356
      @hepphepps8356 Month ago +1

      At 1/50 the quality, for 5 seconds. Even nowadays, the difference between «what average people might need» and what a flexible, professional top-quality solution costs is where the difference in money is, even if your home setup can do 85% of it. Applies to any media. Live events aren't run on cameras with mini-HDMI-to-HDMI adapters, USB microphones or Elgato stuff. If you are recording a small orchestra or a really high-quality band date, you still need the $2000/day studio.
      And yes, obviously, binding medium and large companies into large enterprise solutions for AI, where the cost of licences will creep upwards until they have a constant squeeze on every dollar earned in all of big business, is their whole idea. ChatGPT et al. have already enshittified themselves out of reach for normal people by making their lower tiers worse.

  • @ZephyringMusic
    @ZephyringMusic Month ago +1

    Take me back to the 70’s please

  • @gabrielebulfon
    @gabrielebulfon Month ago +54

    At the moment local models are no way near to Claude or whatever. Your local model is outdated already, unless you’re able to train it with new data. The big change will be when local hardware will be capable of learning new data in real time. It’s not now.

    • @ryosukejoe9615
      @ryosukejoe9615 Month ago

      2030

    • @kurono1822
      @kurono1822 Month ago +1

      You can build RAG and MCPs locally
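
For readers wondering what "RAG locally" looks like in practice, here is a toy sketch of the retrieval step: score each stored document against the question, then paste the best match into the prompt that goes to the local model. Real systems use learned embedding vectors and a vector store instead of the bag-of-words cosine used here, but the pipeline shape is the same (the documents and question are made up):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document most similar to the question."""
    q = Counter(question.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_prompt(question: str, docs: list[str]) -> str:
    """Prepend the retrieved context to the question for the model."""
    context = retrieve(question, docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The studio opened in 1998 with a Digi 001 interface.",
    "Pancake recipes need flour, eggs, and milk.",
]
print(build_prompt("pancake recipe ingredients?", docs))
```

Because both the document store and the model run on your machine, none of your private documents leave it.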

    • @gabrielebulfon
      @gabrielebulfon Month ago +1

      @kurono1822 RAG is not training. It gives you much less control and it's so difficult to design.

    • @krusher74
      @krusher74 Month ago +11

      people just think all the answers AI gives them are correct. The few times I have tried it out, I've asked it things about subjects I know a lot about, and it just gave me vague or wrong googled answers.

    • @gabrielebulfon
      @gabrielebulfon Month ago +8

      @krusher74 you used free models. Pay for a pro one and you'll be surprised.

  • @Yahoomediaclub
    @Yahoomediaclub Month ago +6

    They pulled the plug 🔌

  • @ChristineKenyon
    @ChristineKenyon Month ago +37

    I had both of those Mac towers sitting there. What a time! I was recording at home, and in a band, it felt so revolutionary, because it was!!

    • @brentonfeinberg9298
      @brentonfeinberg9298 Month ago +2

      Mac Flashbacks...I had the G4 between those 2

    • @chrisstout8451
      @chrisstout8451 Month ago +1

      I still have and use the Blue and White Mac along with an M Audio Delta 1010 and running Opcode’s Studio Vision Pro. I use it in tandem with a rack of midi modules. Still functions great. It’s nowhere near the top in sound quality but for a budget hobbyist, it’s great to work with. I do master everything down using a more up to date MacBook Pro running more current software. I just found it interesting that he still has those computers.

    • @open_sky_creative
      @open_sky_creative Month ago +2

      Same! I was doing multitrack digital recording on my Mac Performa in 1994! It was so much cleaner than the cassette 4 track. But it could crash and lose HOURS of work in an instant.

  • @decress
    @decress 23 days ago

    Rick, I love how your videos just start and you get right to it. There's so much ado in most videos, including "coming up" peeks and then an intro video - like they're 60 Minutes or something. Keep it up - strong work.

  • @ZapAndTroy44
    @ZapAndTroy44 Month ago +31

    In the late 90's I had a Compaq computer with a SoundBlaster sound card and interface, with Cakewalk. I could record up to 6 tracks before exceeding the RAM capabilities.

    • @deerock7
      @deerock7 Month ago +3

      yeah I remember that card...great note.

    • @elipuebla2537
      @elipuebla2537 Month ago +6

      SoundBlaster and Cakewalk are still excellent music-making tools to this day, if the system still functions, and best of all, OFFLINE.

    • @DaxDoesDigital
      @DaxDoesDigital Month ago +1

      😅 OMG. 😳 I haven’t thought of SoundBlaster in years! Flashback!

    • @gregb8565
      @gregb8565 Month ago +1

      Late 90s? MOTU, and then later Pro Tools, were around way before then, but yeah, compute is and will be the issue.
      Quantum computing is the real game changer.

  • @riche.6660
    @riche.6660 Month ago +114

    The latest Bruno Mars single sounds like someone asked AI to write a Bruno Mars song.

    • @patomaniaco
      @patomaniaco Month ago +16

      * album

    • @AlexaDigitalMedia
      @AlexaDigitalMedia 20 days ago

      Bruno Mars asked Bruno Mars to write a song using AI. Of course I’m joking, but I soon won’t be. It’s inevitable, bro. This is the direction it’s going as quality continues to improve in leaps and bounds. It’s too profitable to fail.

  • @Yahoomediaclub
    @Yahoomediaclub Month ago +5

    Had the 002 Console Rick 24 yrs back

    • @ovivan79
      @ovivan79 Month ago

      My dad is still using his 002 rack to extend an Apollo Twin via ADAT. It’s good gear.

  • @shakti.rathore
    @shakti.rathore 26 days ago +1

    Wow. In 2000 I was 9... how awesome it is to have experienced folks like you sharing their views here. Gratitude

  • @KarstenJohansson
    @KarstenJohansson Month ago +62

    Jean-Michel Jarre recorded Oxygène in his kitchen and dining room back in '76. He was ahead of his time in a number of ways. Since Y2K, the music industry caught up to him. Now we're onto the next phase (good or bad as that may be).

    • @jetfueled2563
      @jetfueled2563 Month ago +14

      JMJ was my gateway into another world of music ... first heard Oxygene on a souped up quad system in a friend's car during a blinding snowstorm ... and never looked back.

    • @KarstenJohansson
      @KarstenJohansson Month ago +9

      @jetfueled2563 When I was in high school, it was already retro. But I used to walk home from work with Oxygène playing on my Walkman. Very entertaining. Who needed drugs when you had that atmospheric 3D-sounding stereo swirling around in your head? I learned a *lot* about how to use ambient mixing and big-ass delay from listening to JMJ.

    • @McSlobo
      @McSlobo Month ago +2

      Watch "Jean-Michel Jarre - Live in Sevilla - ARTE Concert" starting from 55:20. He has not changed. He has always been interested in new technology.

  • @hanspeterlillese2225
    @hanspeterlillese2225 Month ago +1981

    An insane amount of money and energy consumption for something we don't really need.

    • @-Thunder
      @-Thunder Month ago +31

      I dunno. My GF called tech support for a specialized software problem and they couldn’t help her after an hour of trying. Then she asked ChatGPT and she got it solved in 5 minutes. AI filmmaking is coming for a fraction of the cost and no Hollywood gatekeepers will stop independent voices. Do we “need” it? I guess not. But we don’t “need” A LOT of stuff, depending on your definition. Of course, the real reason the big guys won’t go out of business is that AI is a national security issue and they will be generating their own power. It’ll be interesting to see if they generate it more efficiently than ever. They’ll be recycling their own cooling water as well.

    • @dwaynewladyka577
      @dwaynewladyka577 Month ago +5

      Agreed! Cheers, Hans! ✌️

    • @JDGauchat
      @JDGauchat Month ago +15

      Yes, we need it. Any company or country that doesn't develop or adopt AI is going to disappear or be controlled by those that do.

    • @stephenvalente3296
      @stephenvalente3296 Month ago +15

      @JDGauchat Until their computers go down, then the old hands will keep working the old way, using their brains instead of expecting a computer to tell them something that's been centrally curated.

    • @hanspeterlillese2225
      @hanspeterlillese2225 Month ago +13

      ​@-Thunder Don't forget that most software problems arise because we are being dragged by the nose into doing more and more digitally, which could just as well be done in another way.

  • @philsbigboxofwhatever
    @philsbigboxofwhatever Month ago +13

    I still have my dad's Fostex 4-track from the 80s. Still a dope piece of equipment after all these years.

  • @chuckwright8889
    @chuckwright8889 Month ago

    Rick, you've always entertained. Tonight you INFORMED ....thank you.

  • @ArmenChakmakian
    @ArmenChakmakian Month ago +5

    Stay tuned for the new Chef Beato channel!

  • @OccultFanOFFICIAL
    @OccultFanOFFICIAL Month ago +23

    We clearly still love and need you when you're 64, Rick. This is a watershed realization. Good job, Bro.

    • @pcatful
      @pcatful Month ago

      What does it change?

    • @bradlyscotunes9156
      @bradlyscotunes9156 Month ago +1

      ​@pcatful
      If you didnt grok that from what Rick said/did, there's no helping u.

    • @maksmakes
      @maksmakes Month ago

      @pcatful everything

  • @figlermaert
    @figlermaert Month ago +8

    Problem one with the boss vacation email… you ask your boss vs tell your boss 🤪

    • @andyharpist2938
      @andyharpist2938 Month ago

      Yup. Crude and stupid. Like so many AI fake videos. ..."Sir Winston Church Hill said at the time."

  • @joshpagano
    @joshpagano Month ago

    Incredible video! Thank you for this!!

  • @MarkHaynesandfriends
    @MarkHaynesandfriends Month ago +11

    I recall that in the late ‘90s Macromedia had a software tool named Deck II which allowed users to record digital multitrack audio directly into a Mac.

    • @vsoproductions
      @vsoproductions Month ago +4

      Macromedia ruined Deck II, then Apple bought it and killed it. OSC was the original developer. It was wonderful, especially when Digidesign said their hardware would only do four tracks per card. OSC doubled that count with just their software.

  • @psivewri
    @psivewri Month ago +27

    I love that we've both got G3 and G4 powermacs in the backgrounds of our videos 😂

    • @mdrumt
      @mdrumt Month ago +2

      Bet you his doesn't smell as Eucalyptus as yours! 😅

  • @bill-wowzer
    @bill-wowzer Month ago +8

    I loved the look and maintainability of the PowerMac G3 towers! So easy to open up and upgrade RAM, hard drives, or peripheral cards. 👍

    • @papalaz4444244
      @papalaz4444244 Month ago

      ok AI generated advert

    • @ReidDesigns
      @ReidDesigns Month ago +2

      It was beautiful.

    • @bill-wowzer
      @bill-wowzer Month ago +1

      @papalaz4444244 dammit! You figured it out! I’ve still got a pallet of these 25 year old computers that I have to sell and I would have gotten away with it if it weren’t for you nosy kids! 😂

    • @Strange-Songs
      @Strange-Songs Month ago

      @papalaz4444244 AI generated advert...for a product that does not exist?

    • @charcoalgriller
      @charcoalgriller Month ago +1

      lift the latch, open the machine. It was great...until my house got hit by lightning.

  • @JMCava
    @JMCava 24 days ago

    What I love about Rick Beato that this episode shows is that I don't feel he's pushing an absolute Right or Wrong on a topic, but he has the skill - and enjoys - holding an issue like this up to the light for people to think about, discuss and debate (as in the many thoughtful comments), without having a personal or financial stake in the issue. So refreshing.

  • @HappyT234
    @HappyT234 Month ago +6

    In 1996 I bought a program to record music called Saw Plus. The software cost $800 and was on one floppy disk!

    • @AtticusDraco
      @AtticusDraco Month ago

      😆 1 disk?! That's kinda hard to believe!

  • @JoePiotti
    @JoePiotti Month ago +162

    I think most people don’t realize that you need a big data center just for training a model. But to run a model, you just need a fairly decent PC; they call this an inference computer.
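
The training/inference split in this comment can be put in rough numbers using common back-of-envelope approximations: training a dense transformer costs about 6·N·D floating-point operations (N parameters, D training tokens), while generating one token at inference costs about 2·N. The GPT-3-scale figures below are the widely reported ones, used only to show the scale gap:

```python
# Back-of-envelope numbers behind "train in a datacenter, run at home".
N = 175e9      # GPT-3-scale parameter count
D = 300e9      # roughly the reported GPT-3 training-token count

train_flops = 6 * N * D                      # ~3.15e23 FLOPs: a datacenter job
flops_per_token = 2 * N                      # ~3.5e11 FLOPs: fine for one machine

tokens_of_equivalent_inference = train_flops / flops_per_token
print(f"training ≈ {train_flops:.2e} FLOPs")
print(f"one generated token ≈ {flops_per_token:.1e} FLOPs")
print(f"training ≈ {tokens_of_equivalent_inference:.0e} tokens of inference")
```

One training run costs as much compute as serving hundreds of billions of generated tokens, which is why the same model that needed a cluster to build can answer questions on a desk.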

    • @leucome
      @leucome 29 days ago +2

      Yes, and as soon as somebody figures out a way to split the training into multiple small parts, the data center is not even required anymore. If so, everybody could build custom AI from small parts trained independently. One promising way would be the continuous-thinking approach, where all the independent small expert models write into a scratchpad that they pass to each other. Look up "HRM 27M model beats GPT"; they used this technique and got really good results with a tiny model.

    • @walterdeminicis737
      @walterdeminicis737 29 days ago +1

      With a fairly decent PC you run a model that has a very small context window compared to what you may be used to from ChatGPT or Claude

    • @bender8100
      @bender8100 28 days ago

      ​@walterdeminicis737 We're looking at the problem from the wrong angle. Artificial intelligence is currently completely useless. It's not me who says this, but the numbers themselves. Could we create encrypted, open-source models that would provide some kind of support for everyone? Maybe.

    • @isojamo4756
      @isojamo4756 27 days ago

      @leucome Seems blockchain would be ideal for this.

    • @francescobondini3051
      @francescobondini3051 27 days ago

      @walterdeminicis737 but when will the average user need such big context windows or such deep “thinking” capabilities? Come on, it’s just the CEOs of these companies trying to convince you that you need all of this, but you don’t really need it.

  • @dawmix
    @dawmix Month ago +9

    I was in the printing business back then and the same thing happened. Suddenly these imaging businesses started popping up and we could send them files to output to negatives and separations. Then PDF came out from Adobe, everything moved toward digital, and all those expensive imaging centers disappeared after spending all that money on equipment!

    • @davidrennie8197
      @davidrennie8197 Month ago +2

      Scanner drums and 4-colour reprographics went in the bin not long after clients started doing the work on their new Mac computers

  • @carymichaelayers6803

    good points on the data centers. Keep rockin', Rick!

  • @3v3rb0t
    @3v3rb0t Month ago +10

    But if they don't have the data, how do they continue to train the models?

    • @TheGatorDude
      @TheGatorDude Month ago

      They are already models within the singularity, creating and training themselves (self improvement). The human element is shifting already to governance and integrity as these agents don't require our data anymore because they can make it themselves.

    • @chriscaudle2792
      @chriscaudle2792 Month ago

      And training is much more resource intensive than running the inference models. Perhaps the AI companies will have to rely on sales of trained models (or weights for open source models) rather than actually running the inference engines.

    • @krusher74
      @krusher74 Month ago

      It's not really AI if it's not thinking for itself.

    • @brianmi40
      @brianmi40 Month ago +1

      They create synthetic data.
      "In modern Large Language Models (LLMs), synthetic data for ongoing Reinforcement Learning (RL) is created through iterative loops where models generate their own training signals, reducing reliance on human annotation. This process typically follows a Self-Improvement or Reinforced Self-Training (ReST) paradigm."

    • @patientzerobeat
      @patientzerobeat Month ago +2

      @krusher74 no current AI thinks in any capacity; it's all just a very sophisticated predictive generator. AI engineers like Yann LeCun (a pioneer of deep learning) have lamented the fact that because LLMs have been so [superficially] impressive, with a huge WOW factor, it will set back true development of "proper" AI, because so many people think "we're almost there!". I can imagine a time, perhaps decades from now, when it just seems silly to say that we had AI in the late 2020s. Metaphorically, I don't even think we're at the point of the Wright Brothers flying that first airplane compared to the jets we have now. It's more like hot air balloons, which kinda look like flying and are impressive as hell if you've never seen a human up in the sky, but ultimately aren't exactly part of flight science and aerodynamics etc.

  • @Darkandstorme
    @Darkandstorme Month ago +74

    Planning a trip is absolutely part of the fun and adventure for me personally

    • @McSlobo
      @McSlobo Month ago +2

      Most people are super lazy. When you realize that, it opens many doors for making money, if that's your thing.

  • @Thefamiliaguy
    @Thefamiliaguy Month ago +17

    Using an LLM, and creating one capable of compiling all the data needed to make a substantial one, I have to imagine require very different computing-infrastructure horsepower.

    • @brianmi40
      @brianmi40 Month ago

      Not really true technically, but we are starting to see the first segmentation where chips are tuned better for inference than training, so it will be a growing thing. Agentic AI and OpenClaw tell us we need >4x the data centers for all the demand headed our way, so Rick just doesn't have the background.

    • @johnmahoney5393
      @johnmahoney5393 Month ago +2

      @brianmi40 Keep drinking the Kool-Aid. Who made those projections?

    • @brianmi40
      @brianmi40 Month ago +1

      Google has TWO Os in it. I'm not going to spoon feed some insulting M0R0N.
      YOU get off your @ss and go figure out WHY Google very near DOUBLED their AI data center construction budget for this year, FAR exceeding even the MOST aggressive analyst predictions.
      Or, maybe figure out what the hell SaaS-pocalypse was and why it erased $1T in the stock market overnight.

    • @officebreakgaming1555
      @officebreakgaming1555 Month ago +2

      @brianmi40 Anti-AI hype is the new hype, not to mention wishful thinking. I remember how people thought that the internet was a fad in the early 90s.

  • @jessebillson
    @jessebillson 29 days ago

    Figured it out last night, made a video today. Sure seems like the most knowledgeable guy on THIS topic.

  • @csu111
    @csu111 Month ago +334

    Your video helps to confirm that these monster data centers are not intended primarily for citizens. We’re paying for military, government, and surveillance technology. Basically control.

  • @jacob.munkhammar
    @jacob.munkhammar Month ago +31

    What you say makes sense. But there is one thing missing: Someone needs to make the LLMs. They (still) require a lot of data and computing power. (Or am I missing something?)

    • @markshveima
      @markshveima Month ago +1

      I had the same thought. The data centers would still be needed, I would think.

    • @robertweinmann9408
      @robertweinmann9408 Month ago +2

      Yes, and the LLMs will also need to be updated over time to keep their accuracy. Who takes care of that?

    • @muthahumpa2715
      @muthahumpa2715 Month ago +5

      Also, how will it continue learning if it's not connected to the net?

    • @johnmahoney5393
      @johnmahoney5393 Month ago

      Do they not have enough data centers for training, yet?

    • @johnmahoney5393
      @johnmahoney5393 Month ago +2

      @muthahumpa2715 Seriously? Rick’s computer was only offline to prove that processing was local.

  • @JDGauchat
    @JDGauchat Month ago +12

    The models have to be trained, so they are still going to charge you for that. Things are going to change fast, but there are always going to be better options available that you pay for.

    • @brianmi40
      @brianmi40 Month ago

      The "better" options are FREE. Gemini Flash is 100% free, 4x faster than what Rick showed on mid-range Apple silicon, and has internet search for the latest info as well as tool calls. No learning/installation/firing it up, just open a browser tab...
      Sure, WANT to learn about AI? Go for it, run inference at home. But for the silly questions he asked, there's zero reason for the average person to expend all that effort to get a poorer answer at home than using a frontier, 100% free model online.
      So, NO, data centers won't go "idle" due to home inference. No one who has seen or has even a tiny clue about Agentic AI or OpenClaw would ever think such a silly thing, and that ignores that NO COMPANY will run their company AI on "employees' home computers!!!".

    • @boardskins
      @boardskins Month ago

      There is a point where you can only reach a certain level of music production and fidelity.

  • @JonasStuart
    @JonasStuart Month ago

    Very interesting. I'll be downloading and trying one of these this week. Thx.

  • @robertdouble559
    @robertdouble559 Month ago +5

    I would say that revolution started in 1993 with Atari ST 1080 machines running Emagic Logic Audio with a 4-input audio card. We used to stripe them to the 24-track tape machines, for running MIDI stuff and 4 tracks of digital audio trickery

    • @dfbaerwald
      @dfbaerwald Month ago

      I remember that process. Ugh.

    • @DROPTHEGRID
      @DROPTHEGRID Month ago

      Putting the MIDI port in the ST was a real winning move. The Amiga needed an extra box.

  • @bab008
    @bab008 Month ago +14

    I remember paying $250 an hour in the mid-1980s for studio time. Ouch! That's like $1000 an hour now.

    • @MikeWong-ev9zq
      @MikeWong-ev9zq Month ago

      You got robbed. Lol.

    • @clsieczka
      @clsieczka 7 days ago

      Right, I recall even $300. It would cost a band thousands just to put a simple demo tape together. Good times, bad times.

  • @1995ssc
    @1995ssc Month ago +14

    Yes, running an LLM (even a big one) on a home computer is feasible (especially if you have a big GPU). But you still won't be able to train your own models on one. Training requires enormous amounts of compute power and memory bandwidth. Those massive datacenters are being used to train the LLMs, not really to run them. For example, the GPT-3 model with 175 billion parameters took 280 GPU-years on a $10k GPU (or 10 days on a cluster of 10,000 GPUs). GPT-4 has 1.8 trillion parameters.
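
The cluster arithmetic in this comment checks out; a two-line sanity check (taking a year as 365 days):

```python
# 280 GPU-years of training work, spread across a 10,000-GPU cluster.
GPU_YEARS = 280
CLUSTER_SIZE = 10_000

days = GPU_YEARS * 365 / CLUSTER_SIZE
print(f"{days:.1f} days on {CLUSTER_SIZE:,} GPUs")   # about 10 days
```

The same 280 GPU-years on the single $10k GPU would take, well, 280 years, which is the whole argument for why training stays in the datacenter.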

    • @jarzyn15
      @jarzyn15 Month ago +1

      @RickBeato pin this comment to the top.
      Add more context to your video.
      Please do something to present full and real information.
      There are many great videos on YouTube about LLMs and NPUs; I'm afraid this one is missing a lot of context.

    • @visionofdisorder
      @visionofdisorder Month ago +2

      I don't know why people keep repeating this like it's fact. The data centers are NOT primarily for training models, they are for handling requests. Only about 30% is for training, which is still a lot, yes, but it's not the main purpose.

    • @1995ssc
      @1995ssc Month ago +1

      @visionofdisorder And YOU missed my point. Datacenters are required for training. You cannot train large models on your home computer or even your on-prem servers. Sure, fine, datacenters also distribute the load of millions of model queries from users. But each of those is computationally trivial compared to training tasks and can be done anywhere. So I disagree with you: the primary purpose of large datacenters is the tasks that can't be done elsewhere, regardless of how much of the compute time is used for those complex tasks.

    • @lukassmida4885
      @lukassmida4885 Month ago

      @visionofdisorder Yes, they are primarily for training, because the big AI companies' business model is not built around online request handling.

    • @F4c2a
      @F4c2a Month ago +1

      Yeaaah... And like, the stuff he asked, just typing it into Google gives you the answer, no AI sub required... But what if I want to create pics and video from prompts? What if I want the AI I chat with to have a memory big enough to know what we talked about last week? My PC can't pull that off, unless I had like a $5k+ PC, right?

  • @dennisbarrington221
    @dennisbarrington221 26 days ago

    This is the most important AND useful video I’ve ever seen on YouTube.

  • @Darkoryou
    @Darkoryou Month ago +26

    I have just bought a barebone mp3 player. No wifi just an SD card with your music and audiobooks. I love the distraction free listening!

    • @gabibonza
      @gabibonza Month ago

      It has YouTube too, apparently. 😂

    • @kobrewing
      @kobrewing Month ago +3

      I bought a cheap Chinese phone with an SD for music. I don't use it for anything else. It's great and works anywhere without ads. I also think it's funny since car makers steal your phone info for sale when you pair it to your car. I don't pair my real phone and they get nothing but an mp3 phone 🤣

    • @timn5008
      @timn5008 Month ago +1

      @gabibonza No, his phone has YouTube. Or his desktop or laptop.

    • @yelllownine
      @yelllownine Month ago

      🙂👍

    • @kbstabs5982
      @kbstabs5982 Month ago +1

      If an MP3 player has an SD card, it is not barebones!!

  • @stillavenue
    @stillavenue Month ago +341

    This is why they want to kill the personal computer market. They want us to have barebones systems that require us to connect to them to use these models. Watch home computing advancement completely plateau and even start reversing. We’ll stop getting newer powerful hardware and start getting cheaper and cheaper and less powerful machines.

    • @samsung-ok4ki
      @samsung-ok4ki Month ago +13

      Yes the PC is their Kryptonite

    • @l30n.marin3r0
      @l30n.marin3r0 Month ago +7

      Don't use it. Everything people have to do is: shut it off.

    • @JoseLuisOchoaPadilla
      @JoseLuisOchoaPadilla Month ago +39

      Exactly! And they already started by making RAM unaffordable...

    • @rawdez_
      @rawdez_ Month ago

      yes, 100% THIS. e.g. corps are killing PC gaming with overpricing to force everyone onto GeForce NOW datacenters, turning video cards/GPUs into an overpriced monthly subscription.
      currently corps are just milking AI, selling 0-progress junk hardware, selling upscaling and fake frames instead of actually improving GPUs, true-FPS performance and performance/dollar. nvidia is actually selling overpriced 5000 cards to themselves and putting it in financial reports to fake good sales to investors. because we are very close to actually realistic movie-level graphics in games. all they have to do is make AI GPUs where graphics are AI-generated from primitive graphics (for cohesion). the GPU of the future is 99%+ AI cores, with the rest small, weak rasterization cores just fast enough to run primitive graphics. OR just increase VRAM and connect SSD/DDR5-6 directly to the GPU chip for proper no-lag DirectStorage without the long PCIe path = realistic models, textures, effects = realistic graphics. btw that's BoltGraphics' approach, but they've killed it by overpricing DRAM/SSDs.
      the problem is that a GPU capable of actually realistic movie-level graphics is the endgame GPU; there's nothing to improve or sell after that each year at 10x overprice to noobs. it'd also make pro cards and servers for movie studios useless, and kill nv and amd graphics businesses. and consoles. you'd be able to buy a GPU and use it for 10-20 years or more until it breaks, with no need to upgrade.
      Jensen leather-jacket Huang's solution for this is to kill PC GPUs/gaming with overpricing altogether and force everybody onto GeForce NOW datacenters. because you want that sweet better graphics, don't you.
      tldr: nv won't sell next-gen realistic gaming in discrete GPUs, they'll sell it as an overpriced monthly subscription. because they need ROI on all the datacenters they've built and are building right now.

    • @rawdez_
      @rawdez_ Month ago +11

@JoseLuisOchoaPadilla Actually, they started by making GPUs unaffordable first.
Nvidia does it to sell laggy GeForce NOW subscriptions; AMD does it to sell zero-progress, obsolete, slow console/handheld hardware, extend console cycles, and milk the market by selling old high-margin junk. AMD produces basically all the relevant consoles and handhelds, so it's happy with PC GPUs being overpriced and with having almost no GPU market share, as long as it can sell junk for consoles at 10x markup.
There's a startup called BoltGraphics that threatened to release GPUs 10x faster at ray tracing than a 5090 simply by using way more RAM. Overpricing RAM killed that, not to mention nerfing the whole PC/smartphone market into low-VRAM/low-RAM junk or overpriced hardware.

  • @Marco-HidalgoMusicRecords

    4:25 SO HOW DO I DOWNLOAD IT???????? 🤔🤔

  • @dr.python
    @dr.python Month ago +1

    Now this is music to my years.

  • @tonygebhard
    @tonygebhard Month ago +7

    Love your content, Rick! Thank you for putting the hard work into what you do for research.

  • @alexstronach2134
    @alexstronach2134 Month ago +8

This is not a slight on Rick Beato, who is clearly a bright guy who’s put a lot of thought into this, but I love that I’m getting insightful tech advice from a musician/producer. What a crazy time we live in.

    • @GiJoe94
      @GiJoe94 Month ago

      Things are getting hard. You have to adapt as a musician

  • @ioannisthemistocles9198

I’ve used both self-hosted LLMs and the frontier models for my business as a software architect and system administrator. Unfortunately, the local models require very expensive hardware. Right now it is much cheaper to use paid models than to run good local models. I am hoping that smaller, more specialized models will become available, so more affordable hardware can be used. I don’t need a model that can create a chicken dish. I need one that is an expert in the technologies I work in.

    • @cancel1913
      @cancel1913 Month ago +3

      Just give it ... time. 😀

    • @nosafewords
      @nosafewords Month ago +4

Macs do really well for local LLMs because of the unified memory shared by the GPU and CPU.

    • @user-xn3tt5pt4h
      @user-xn3tt5pt4h Month ago +1

This is a great real-world insight. The 'right now' clause is, I think, the essence, particularly against Rick's key points: that computers become more capable over time, and that privacy and control of personal data are of financial value, and therefore factor into the business case.

    • @guybayes
      @guybayes Month ago +1

Sure, give it 10-20 years and you can maybe do on your local computer something that takes a ton of distributed compute today. But in 10-20 years those data centers are going to be able to do stuff with models you can’t even imagine.

    • @FuZZbaLLbee
      @FuZZbaLLbee Month ago

Correct, and even the frontier models are far from perfect. Big tech is in a race to get models so good that they can improve themselves.
Local models are good enough for some cases, but will be way behind for years to come. Also, those labs might stop releasing them, like what is seemingly happening with Qwen from Alibaba.
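The replies above weigh hardware cost against model size. A back-of-the-envelope way to see the trade-off is to estimate a quantized model's memory footprint. This is a minimal sketch using a common rule of thumb (parameter count × bits per weight, plus an assumed ~20% overhead for the KV cache and runtime); the numbers are illustrative, not benchmarks.

```python
# Rule-of-thumb memory estimate for a quantized local LLM (a sketch, not a benchmark).

def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead_fraction: float = 0.2) -> float:
    """Estimate RAM/VRAM in GB needed to load and run a quantized model.

    Weights take roughly (params * bits_per_weight) / 8 bytes; the overhead
    fraction is an assumed allowance for KV cache and runtime buffers.
    """
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8-bit ~= 1 GB
    return weight_gb * (1 + overhead_fraction)

if __name__ == "__main__":
    # A 7B model at 4-bit quantization: ~3.5 GB of weights, ~4.2 GB with overhead,
    # which is why it fits on an ordinary laptop.
    print(round(model_memory_gb(7, 4), 1))   # 4.2
    # A 70B model at 4-bit: ~35 GB of weights, ~42 GB total, which is why a machine
    # with lots of unified memory (like the 128 GB Mac Studio in the video) helps.
    print(round(model_memory_gb(70, 4), 1))  # 42.0
```

The same arithmetic explains the earlier reply about Macs: unified memory lets the GPU address the whole pool, so a 40+ GB model can run without a workstation-class discrete GPU.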

  • @ckpro75
    @ckpro75 28 days ago

Thank you, Rick. There really should be more people/youtubers/influencers like you, so this message can spread even further.

  • @MoonshineDelight
    @MoonshineDelight Month ago +26

Recipes? Tourism? Multi-stage complex tasks fall apart on these small local models, which hallucinate with confidence. Frontier hyperscaler models are the only ones that can handle large multi-step workflows with large context windows.

    • @loganmcmillan4178
      @loganmcmillan4178 Month ago +6

      Exactly - the tourism one wouldn't even have reliable up to date info when he's not connected to the internet.

    • @iansaunders4877
      @iansaunders4877 Month ago +3

For now, yes. As agentic modalities evolve and RAG tooling becomes more mainstream, this is a relatively trivial issue. Just like this didn't happen overnight for the studios, it will not happen overnight with AI, and the key hyperscalers that build around an ad model will (unfortunately) continue to succeed and outcompete.

    • @neilswherethelightis
      @neilswherethelightis Month ago

@iansaunders4877 Yeah, for the meantime, people are gonna keep using ChatGPT and Claude, which constantly improve, rather than the local ones.

  • @hakanyakici8607
    @hakanyakici8607 Month ago +213

    Time to talk about Angine de Poitrine…

  • @ArmatekAutomation
    @ArmatekAutomation Month ago +7

BTW Rick, watch Angine de Poitrine, if you didn't do it yet. 😂 It's a duo from Quebec, Canada.

  • @DrumAndStrum
    @DrumAndStrum 12 days ago

Awesome demo and insight. I didn't know I could run these locally.

  • @johnnykarate_SweepLeg
    @johnnykarate_SweepLeg Month ago +385

    Meanwhile, everyone now wants physical copies of music.

    • @edwardlee2515
      @edwardlee2515 Month ago +11

A lot of fans treat them more as merchandise, like t-shirts, than as a way to listen to music. It skewed the vinyl market in a not-so-good way for those buying records to spin on their turntables.

    • @troykelley6507
      @troykelley6507 Month ago +7

      Back in the day, everyone was telling me albums were great, and that digital CDs would never take over analog because analog was so warm. That was a good argument for a little while, but eventually, everyone went digital.

    • @marktait2371
      @marktait2371 Month ago +5

Yeah, agree with Ed. Our local record store, at least the one I used to go to about every month, sold used vinyl, cassettes, and CDs at very reasonable prices, but prices on new releases and reissues kept rising, and it's still that way currently.

    • @emanemantsal9789
      @emanemantsal9789 Month ago +22

      "everyone now wants physical copies of music."
      Hardly.
Don't be fooled by stories of "a 125% surge in vinyl album sales over the previous year!!!"
      20,250 vinyl albums - up from 10,000 last year - is still NOTHING.

    • @madkykc
      @madkykc Month ago +28

      Not everyone, just a very vocal minority. Everyone is using streaming services without thinking twice about it.

  • @Haruchemy
    @Haruchemy Month ago +10

You’re a hero. Did your homework, learned how its gears move, and now you’re much more optimistic. I remember commenting on the video about AI music taking over that all that slop only makes real musicians' talent even more valuable and sought after. And here we all are. Listening to you.
    And subscribing and liking!
    You earn it every time!
    Cheers!

  • @Soeund_Goelum
    @Soeund_Goelum Month ago +5

The law of music is that music is born from the emotional experience of life and told through the sounds the artist feels best express and convey that experience.

    • @taroman7100
      @taroman7100 29 days ago +1

The technocrats don’t care about a real world. They’re playing with mud and Play-Doh, making us a new one. It will be sterile, non-private except for them, eventually turning us into socialists or worse. It will be the great equalizer, where humans as individuals will all be the same and have their fingerprints burned so as not to identify who we are.

  • @shineon2595
    @shineon2595 14 days ago

    Great video! Thank you!😊

  • @alter_ukko
    @alter_ukko Month ago +40

    Greetings from ROC NY. This is a really insightful take, Rick. I’m a software engineer who’s been following this situation closely. I also believe that local models are the future, but I never thought of the recording studio analogy. Brilliant. Apple, too, agrees with you, I think. Their upcoming machines are built around this type of workload. They need it to provide AI in a way that protects privacy, but it benefits us in other ways.

    • @cmflyer
      @cmflyer Month ago

      Is this why they have been troubling over Siri for so long?

    • @prisonbread
      @prisonbread Month ago +1

      I really hope you’re right and that there’s a damn good excuse for their development of AI to languish so. As this other commenter mentioned, Siri has been seriously neglected so hopefully there’s a good reason.

    • @TurdFergusen
      @TurdFergusen Month ago +2

They will fight hard to ensure you cannot have local inference and are reliant on subscriptions that the government can also spy on.
5090s? Gone.
Cheap memory? Gone.
The Apple route might be the saving grace.

    • @alter_ukko
      @alter_ukko Month ago

      > Is this why they have been troubling over Siri for so long?
      That is my impression, yes. If you think about it, Apple just happening to be less than competent at the technology isn't very convincing. They have the resources and infrastructure to do well. Their value proposition ("your data is more secure with us") means that LLM screw-ups that Meta or Google see as acceptable can't be good enough for them. This is a very difficult technology to tame because of its unpredictability, and once you give it agentive power to do things on your behalf, it becomes downright dangerous.

    • @JediKnight10
      @JediKnight10 Month ago

      Apple is using Nvidia GPUs with loads of AI coded into the chip. Apple is not doing anything on their own as far as the AI goes.

  • @motorcycleboyGG
    @motorcycleboyGG Month ago +7

Data centers likely won't go out of business. They will be used for medical, astronomical, military, scientific, and logistics work: large-scale industries and disciplines that require far more computational power than most of what we need on local machines. In addition, VFX and movie/film generation may need more computational power than most local machines can provide.

    • @brianmi40
      @brianmi40 Month ago

All of that, except for video, will be DWARFED by agentic AI / OpenClaw-type uses. There's no "likely" necessary; they just WON'T. Google didn't mistakenly DOUBLE their budget for new data centers this year because they "failed to call Rick and ask him".

  • @rossgoldstein
    @rossgoldstein Month ago +6

    I’ll agree with your assessment when we can download the LLMs on our phone. Most people find it a chore to go to their laptop outside of work hours. But as soon as the phone can handle this locally off its own hard drive, I agree with you.

  • @silvabakx6396
    @silvabakx6396 29 days ago

    Thank you sir! You provide a great service

  • @williemo44
    @williemo44 Month ago +16

    The G3 was the shiznit back in 1999-2000. 0:58

    • @brianmessemer2973
      @brianmessemer2973 Month ago +1

      I forgot all about shiznit!! 😂😂😂

    • @dustbinfilms
      @dustbinfilms Month ago +2

      @brianmessemer2973 you forgot when shiznit was the shiznit???

    • @SatoshisBull
      @SatoshisBull Month ago +2

      Can confirm, it was certainly the aforementioned "shiznit"

    • @parkeranderson7599
      @parkeranderson7599 Month ago

You can still stream modern YouTube at 720p on a PowerPC G5. Early-2000s Macs were awesome lol

    • @Spladoinkle
      @Spladoinkle 27 days ago

The G3 was to that era what Apple silicon is to this one.

  • @NL-mp8mw
    @NL-mp8mw Month ago +10

Instead of AI music, Rick, give us your opinion on Angine de Poitrine. As a proud Quebecer, I love those guys and their "out of this world" music! (Notwithstanding their musicianship!!!)

  • @BeerStein33
    @BeerStein33 Month ago +10

    Still have my Pioneer Reel-to-Reel.

    • @AceODale
      @AceODale 23 days ago

      Jealous. I wish today that I had kept mine.

  • @SandbarFilmsCA
    @SandbarFilmsCA 23 days ago

Wow, your best show ever... it's something I've been feeling, that there is a disconnect happening... thnx!

  • @thefunbot
    @thefunbot Month ago +37

    wasn't prepared to have you install LM Studio!! hehe
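For anyone who, like this commenter, just installed LM Studio: besides its chat window, LM Studio can serve a downloaded model over an OpenAI-compatible local HTTP API, so your own scripts can query it fully offline. This is a hedged sketch, not official documentation: the base URL `http://localhost:1234/v1` is the commonly used default (check your LM Studio server settings), and `local-model` is a placeholder model name, not a real identifier.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_model(prompt: str, model: str = "local-model",
                    base_url: str = "http://localhost:1234/v1") -> str:
    """Send a prompt to a locally hosted model; requires the server to be running.

    The URL and model name are assumptions for illustration; point them at
    whatever your LM Studio instance actually reports.
    """
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (only works with the local server running; no internet required):
#   print(ask_local_model("Suggest a simple chicken dish."))
```

Everything stays on your machine: the request never leaves localhost, which is the offline point made in the video.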

  • @kgconstrictors
    @kgconstrictors Month ago +16

I see artists like Kalax who have AI-created female singers as well as music created with AI. Very sad, as we won't have as many human singers, etc.

    • @JetCityMatt
      @JetCityMatt Month ago +3

      There are a lot of singers in their 40s. We learned to sing in the 90s.

    • @alanshepherd4304
      @alanshepherd4304 Month ago +4

Chewing gum for the brain, with no nourishment!! Real music is like real food.... enjoyable, flavourful, sociable, memorable, and so satisfying that you go back for more!!😁🇬🇧🇬🇧

    • @kojoefante
      @kojoefante Month ago +2

@JetCityMatt I don’t think you get what they’re saying.

  • @trevorhall5664
    @trevorhall5664 Month ago +5

It's no surprise that you are insightful and intelligent about fields outside of music production. Finally, a less bleak outlook on AI.

  • @GuitarDhyana
    @GuitarDhyana 24 days ago +1

Rick, you have (accidentally, I think) made one of the most subversive, revolutionary videos of the current era. I now realize that our media sort of covered up the full implications of Qwen, DeepSeek, and the like. Yes, they were more energy efficient at the same tasks, but the reason was that these models could run on small, decentralized computers, not so they could squeeze more profit out of data centers. This makes the benefits of AI available to all, with full equity, while eliminating all the negatives of AI: enriching tech bros, environment-destroying data centers, handing our liberty over to a techno-feudalist nation state's full surveillance, and so on. This literally stabs the vampiric heart of these tech bro companies. Kudos. This is deviously great.