Apple M3 Max MLX beats RTX4090m

  • Published: May 16, 2024
  • Try Paperlike here: paperlike.com/alex
    The Apple MacBook Pro with the M3 Max chip is even more capable in machine learning workflows now that the MLX framework is out. Here I test it against the Nvidia RTX 4090 laptop version in one of my typical workflows: speech-to-text.
    Run Windows on a Mac: prf.hn/click/camref:1100libNI (affiliate)
    Use COUPON: ZISKIND10
    🛒 Gear Links 🛒
    🍏💥 New MacBook Air M1 Deal: amzn.to/3S59ID8
    💻🔄 Refurb MacBook Air M1 Deal: amzn.to/45K1Gmk
    🎧⚡ Great 40Gbps T4 enclosure: amzn.to/3JNwBGW
    🛠️🚀 My NVMe SSD: amzn.to/3YLEySo
    📦🎮 My gear: www.amazon.com/shop/alexziskind
    🎥 Related Videos 🎥
    * 🤖 REALITY vs Apple’s Memory Claims | vs RTX4090m - • REALITY vs Apple’s Mem...
    * 👨‍💻 Cheap vs Expensive MacBook for ML | M3 Max - • Cheap vs Expensive Mac...
    * 🤖 INSANE Machine Learning on Neural Engine - • INSANE Machine Learnin...
    * 👨‍💻 M1 DESTROYS a RTX card for ML - • When M1 DESTROYS a RTX...
    * 🌗 RAM torture test on Mac - • TRUTH about RAM vs SSD...
    * 👨‍💻 M1 Max VS RTX3070 - • M1 Max VS RTX3070 (Ten...
    🛠️Code🛠️
    github.com/TristanBilot/mlx-b...
    github.com/ggerganov/whisper.cpp
    - - - - - - - - -
    ❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
    Click here to subscribe: www.youtube.com/@azisk?sub_co...
    - - - - - - - - -
    📱LET'S CONNECT ON SOCIAL MEDIA
    ALEX ON TWITTER: / digitalix
    - - - - - - - - -
    #m3max #m2max #machinelearning
  • Science

Comments • 160

  • @AZisk  a month ago +1

    JOIN: youtube.com/@azisk/join

  • @johnsuckher3037  3 months ago +144

    yeah but can it run crysis

    • @adrimi5  3 months ago +16

      yeah but can crysis feed family? idk

    • @johnsuckher3037  3 months ago +38

      @@adrimi5 family goes on diet after each new pro device gets released

    • @Epicgamer_Mac  3 months ago +3

      Indeed it can, if you know the right toolkits to download and terminal commands to run

    • @andrew.nicholson  3 months ago +3

      Yep! Using Crossover.

    • @DaveDFX  3 months ago

      I’m running GOG Cyberpunk 2077 on my M3 Max using crossover.

  • @collinpurcell986  3 months ago +27

    Awesome video! I would love to see more LLM or other DL architectures benchmarked between the M3 Max and the RTX 4090m laptop. A definitive video saying the M3 Max is X% better/worse than the 4090m for RNN, CNN, or transformer architectures would be a gold mine for other AI/ML devs like me!

  • @roccellarocks  3 months ago +41

    Watched tens of your videos before upgrading from my old i9 MacBook Pro to my M3 Max MacBook Pro.
    Nowadays I still watch your videos (even if I already have an M3 MacBook) because I like the way you make your content - pragmatism, tone of voice, length and cuts.
    👏

    • @tybaltmercutio  3 months ago +1

      How much RAM did you get? I cannot decide between 36 GB, 48 GB or maybe even 64 GB (for future proofing).

    • @marc-andrevoyer9973  3 months ago +1

      @@tybaltmercutio Same situation as OP; went with the 14" and 64GB RAM.

    • @Physbook  3 months ago

      I just keep my i9 MacBook Pro alongside an Alienware RTX 4090.

  • @DimensionFlux  3 months ago +3

    Great to see more MLX content. Please do a comparison with Stable Diffusion MLX vs PC!

  • @randysavage7351  2 months ago +3

    Found your channel from a Fireship vid ~2 years ago. Awesome stuff!

  • @saintsscholars8231  3 months ago

    How would a Mac Studio M2 32GB stack up vs the MBP M3?

  • @Zhiyuai  2 months ago

    To fine-tune Llama on the M3 Max, what size Llama works? How fast? Can you release a video on this topic?

  • @GabrielThaArchAngel  3 months ago

    Hey Alex, I was wondering if you have a video planned for your EDC as a software engineer. I've been looking for a light case that I can carry around for my 16" MacBook with the 12.9" iPad. Trying to get ideas of what you utilize.

  • @mr_ww  3 months ago +1

    Thank you! What is the correct way of comparing my current AMD Radeon Pro 5300M 4 GB (MacBook Pro 2019) to Apple M-series silicon, in terms of the MacBook gaming experience? I play a game from time to time and would like to make sure that an M chip won't take that away from me :)

  • @Dadgrammer  3 months ago +6

    Hmm, this difference may be from RAM/VRAM sharing on ARM Macs.
    The ARM GPU can use up to 75% of RAM as VRAM. I don't know which of the 64/96/128 GB RAM versions you have, but in all cases that will be more VRAM than the 20 GB in the 4090.
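
The arithmetic behind this memory point, as a minimal sketch. Note the 75% fraction is the commenter's own figure, not an official spec; on a real Mac the per-device limit comes from Metal's `recommendedMaxWorkingSetSize` and varies with total RAM.

```python
# Sketch of the unified-memory arithmetic from the comment above.
# The 0.75 fraction is the commenter's assumption, hardcoded for illustration.

def usable_vram_gb(total_ram_gb: float, fraction: float = 0.75) -> float:
    """GPU-addressable memory under the assumed fraction of unified RAM."""
    return total_ram_gb * fraction

if __name__ == "__main__":
    for ram in (64, 96, 128):
        print(f"{ram} GB RAM -> {usable_vram_gb(ram):.0f} GB usable as VRAM")
```

Even the smallest 64 GB configuration would, under this assumption, expose far more GPU-addressable memory than any laptop discrete GPU.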

  • @RahulPrajapati-jg4dg  3 months ago

    Hi, can you suggest which laptop is best for LLM + deep learning? I don't want a PC. Can you please help me?

  • @Fledermaus-20  3 months ago +2

    Very nice video, but can you try Faster Whisper for Python on your devices?

  • @asjsjsienxjsks673  3 months ago +43

    No, it's not faster. You're not using Faster Whisper. Also, the Python implementation absolutely uses the GPU. Set the device to mps.
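
A minimal sketch of the "set device to mps" suggestion, assuming a PyTorch-based Whisper setup. The availability flags are passed in as plain booleans so the selection logic stands alone; with torch installed they would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.

```python
# Hedged sketch: choosing the GPU backend a PyTorch-based Whisper run
# should use. Device names follow PyTorch conventions.

def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string to pass to model.to(...)."""
    if cuda_available:
        return "cuda"   # Nvidia GPU, e.g. the RTX 4090m in the video
    if mps_available:
        return "mps"    # Apple Silicon GPU via Metal Performance Shaders
    return "cpu"        # fallback when no accelerator is available

# With the real libraries it would look roughly like this (assumption,
# not verified against the video's exact setup):
#   import torch, whisper
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
#   model = whisper.load_model("base", device=device)
```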

    • @parmeshwarmathpati2916  3 months ago +1

      Yes, can we discuss setting up hardware for building LLMs?

    • @stephanemignot100  3 months ago +2

      Try that unplugged...

    • @asjsjsienxjsks673  3 months ago +13

      @@stephanemignot100 Of course, man; if you plug it in, it's faster, and if you leave it unplugged, it's slower. I'm not debating the fact that the M3 Max is a wonderful chip. All I'm saying is that even the Nvidia 4090 at its peak capability is faster. If you want to say that the battery life is worse, I'm absolutely not denying that, but the M3 Max GPU is not faster than the 4090.

    • @RunForPeace-hk1cu  3 months ago

      @@asjsjsienxjsks673 The 4090 doesn't have 19GB VRAM 😂

    • @asjsjsienxjsks673  3 months ago +3

      @@RunForPeace-hk1cu where did I say that?

  • @weeee733  3 months ago

    Is there any way to run MLX inside an Xcode iOS project?

  • @skyhawk21  3 days ago +1

    Can you make iPad and iPhone app versions of these tests so we can benchmark the M4 on iPad in a couple of days?

  • @user-ol3tf1qi6c  3 months ago +13

    WSL & even Windows itself has a lot of overhead. If you wanted a more "Apples to Apples" comparison, you should've compared it with the 4090 laptop running something like Clear Linux or Ubuntu. It likely wouldn't have closed the gap, but the results would be a lot better.

    • @kja6336  2 months ago +2

      It does close the gap; it actually easily outperforms the M3 with a completely flatlined system. It's just that Apple has a nicer interior than most off-brand and Microsoft computers. A maxed Lenovo, for example, outperforms a maxed M3 on UE5.

  • @ToySeeker  3 months ago +3

    Hi Alex! ❤ Love ya, my guy 😊 Your videos are incredible! Can't wait to fork 🍴

  • @anirudha366  3 months ago

    Can you make a video on how to install Llama using MLX?

  • @parmeshwarmathpati2916  3 months ago

    Hi Alex, can I get a mentorship session with you? I'm ready to pay, for a hardware setup for building LLMs.

  • @geog8964  3 months ago

    Thanks.

  • @cogidigm  3 months ago

    Could you please make a video on Stable Diffusion ComfyUI on Mac? I don't know why nobody has ever made any videos about it.

  • @Alexis_Noukan  a day ago +2

    In French, Bilot sounds like "be low".

  • @SmirkInvestigator  3 months ago

    Anybody have a roadmap for me to learn what makes a language or framework perform better on one arch or another? How clever can tensor operations get? Python I get. But what's the difference between MLX, C++ with ggml, JAX, and Mojo?

  • @markclayton8977  3 months ago +5

    Alex, I found your channel when researching my M3 Max laptop purchase. I love your benchmark methodology, but I also wish I could copy some of your workflows. If you added a code repository to your membership, I would join!

    • @AZisk  3 months ago +6

      As much as I'd like you to join, there is no need to join to see my repos. This is a "better late than never" repo of my tests which I recently started: github.com/alexziskind1/machine_tests

  • @Itcornerbg  3 months ago +4

    Hey, amazing video, very useful. 5:18 - I'm interested in seeing a video on how to install Whisper with GPU support, etc.

    • @AZisk  3 months ago +1

      Coming soon!

    • @Itcornerbg  3 months ago

      @@AZisk - I'm already testing with an Nvidia P40, but it would be interesting to see your results.

  • @hevesizeteny4046  a month ago

    Hello guys! I might sound weird, but how can I look at my subscriptions? :D

  • @stephensiemonsma  3 months ago +5

    Wow, exciting results! I was always optimistic that Apple's unified memory architecture would pay dividends in certain workloads, and MLX appears to be effectively exploiting that paradigm shift.
    Keep up the good work! Love the channel!

  • @mannkeithc  3 months ago

    My apologies if I am being dumb, but why wouldn't you use an NPU for this machine learning process? I thought this is the sort of task NPUs were designed for, and maybe even better at than a GPU. And if you could, how would the performance compare when running on an Apple Silicon NPU (on paper the M3 NPU is 18 TOPS for FP16)? And as every processor manufacturer is now getting on the AI bandwagon, you could even extend it to compare the performance of the AMD 7000 series with AI NPU (10 TOPS; 8000 series NPUs 16 TOPS) or Intel's Meteor Lake Core Ultra with NPU (10 TOPS). Of course, the processor I would really like to see would be Qualcomm's Snapdragon X Elite with its 45 TOPS NPU, but that's yet to be released.

  • @stephenthumb2912  3 months ago

    Have not had good luck running AI workloads on WSL or WSL2 with a discrete GPU. Everything says my GPU is being used, including the docs, but performance is pathetic.

  • @aravjain  3 months ago +1

    Great video, Alex! You have some really enjoyable content on your channel.
    Are you able to send me one of your old M-series Macs? I'm a student and I'm trying to learn some ML/AI stuff.

  • @chrisa5304  3 months ago +1

    Want to watch the stable diffusion one. Want to meet up? I'm in DMV

  • @johnkost2514  2 months ago +4

    The RTX 4090m is equivalent to the desktop RTX 3080, btw.

    • @hugoramallo1980  a month ago +2

      NOPE. RTX 4090 mobile = 3090 Ti desktop = 4070 Ti. 40 TFLOPS all.

  • @dqieu  3 months ago

    Have you tried timing all the machines with the model already loaded in the GPU's RAM to test the raw compute power? It would also be a fairer comparison with cloud-hosted solutions. Anyway, wild that Apple hasn't sent anything to the only ML/AI reviewer on YouTube. AI/ML is the core reason for me to upgrade from M1/M2 to M3 Max.

    • @Slav4o911  3 months ago

      4090 should be faster as long as the model fits in the VRAM... if the model goes outside... it will be slower.

  • @Mabeylater293  2 months ago +5

    Serious question: why would anyone buy a Windows PC when you can buy a Mac that not only can run Windows on it but runs Windows BETTER than a Windows PC??? I'm buying a computer soon and would appreciate the feedback. Thanks.

    • @olepigeon  2 months ago +2

      If power usage isn't your concern, then a PC can and will be faster. A 14th Gen Core i9 + RTX 4090 will likely dominate in all benchmarks. For truly mobile performance (as in on battery, not plugged into a wall), Apple undeniably has the best product on the market right now. So long as you don't want to play any games on it.

    • @ishiddddd4783  a month ago

      For mobile platforms Apple makes sense; for in-house usage it still lags behind by a lot, unless you are already deep into the Apple ecosystem or simply prefer it. For pretty much every benchmark, the only metric Apple is going to win is power usage, which matters a lot in laptops. In desktops, not so much, when a machine that uses more power will get the job done far quicker.

  • @devluz  3 months ago

    very interesting video ... but why do you have so many laptops lying around? :o

    • @AZisk  3 months ago

      for testing

  • @chandanankush  3 months ago

    My takeaway is some fancy tech words to explore next week 😢

  • @rekad8181  3 months ago +2

    I hope all of these were plugged in and not on battery. Also, on the Windows laptop, please go to the power plan and make sure the GPU is maxed out.

    • @MrFhelix17  3 months ago +1

      It's a fucking laptop and we don't usually use the charger outside… that huge charger is a waste of space in the bag.

    • @motherofallemails  3 months ago +3

      @@MrFhelix17 Excuse me? Without the PSU the test is pretty much IRRELEVANT. I can't believe I'm reading such a silly comment; what a pointless video then! Those GTX laptops power right down when running on battery.
      I can't believe how pointless people can be!

    • @ClearGalaxies  3 months ago

      ​@@motherofallemails It has a BATTERY 😮😮🔋🙀🤯😱 (this is ragebait. Please get mad)

    • @motherofallemails  3 months ago +3

      @@ClearGalaxies So has my laptop. The RTX goes into super-low-power mode when running on battery, otherwise it would drain the battery in no time at 160W; you can't do anything practical off the battery! The fact that this test was run off battery power makes this channel a joke, sorry.
      In fact, I'm a bit annoyed at having wasted my time. I'm OUT. 🤬

    • @ClearGalaxies  3 months ago

      @@motherofallemails I was trolling. I know 💙

  • @Anshulb04  3 months ago +1

    7:23 Vision Pro Light Seal Cushion spotted 👀

    • @AZisk  3 months ago +3

      you got me. i still have mine

  • @crearg8259  3 months ago +2

    Wait, what! When I checked a week or two ago, Whisper still didn't support Metal!

    • @asjsjsienxjsks673  3 months ago

      Been using Whisper with Metal via Python and whisper.cpp for months now.

  • @PhantomEverythingSaif  4 days ago +1

    Bro literally has a dozen Macs!

  • @softwareengineeringwithkoushik  3 months ago +2

    Hi Alex, how are you?

    • @AZisk  3 months ago +1

      yo!

  • @Buqammaz  3 months ago

    We want more content about MLX

  • @MrLocsei  3 months ago +3

    "PC Master Race" on suicide watch !! 😂
    (and yes, it's quite probably the M-series chips' Unified Memory architecture that's making the difference here)

    • @D0x1511af  3 months ago +3

      lolz... the limitation here is the PCIe bottleneck, not the Nvidia GPU... if the NVLink protocol ran on PC, it would destroy the M3 Max day and night.

    • @gytispranskunas4984  3 months ago +5

      ?... Lol, are you aware that Nvidia is making an ARM SoC themselves? You know what that means, don't you?... I hate Nvidia pricing. But I know one thing: these guys don't play when it comes to performance. Everyone knows that when Nvidia releases an ARM-based SoC in the upcoming years, it's gonna destroy everything on the market. Like it always does. Also... this laptop does NOT have an RTX 4090. Not even close...

    • @AZisk  3 months ago +4

      if nvidia starts making the entire SoC, they might beat apple, but they are doing too well in just discrete gpus to try that

    • @sas408  3 months ago

      @@gytispranskunas4984 Why do you hate Nvidia pricing? They cost the same as AMD but provide RT cores and CUDA, and they are more stable. Quality and R&D cost money too.

    • @ClearGalaxies  3 months ago

      PC users huffing copium in the comments section 😂

  • @hariharan.c8009  2 months ago

    Hi, Lenovo LOQ i5-12450H 8GB 4060 at 80k vs. IdeaPad Ryzen 7 5800H 6GB 3060 at 71k, for machine learning (college purposes)?

  • @user-mp9zn6zi7z  3 months ago

    You forgot something: when you tried to make a benchmark, you faced the same issue. You couldn't use the whole performance of the GPU/CPU when you used Windows or WSL, and you achieved that when you moved to Linux. Please do it and tell me the results.
    I love your videos.

  • @PratimGhosh1986  3 months ago +3

    WSL uses Hyper-V; there is no way around it.
    MSI laptops are always noisy. If you need a powerful and less noisy Windows laptop, the Lenovo Legion 9i is a better choice.

    • @AZisk  3 months ago

      Haven't tried that one yet. Thanks

  • @Buqammaz  3 months ago

    Finally MLX 🔥

  • @burtdanams4426  3 months ago

    Part of Apple's long game here is to absolutely dominate the mobile market in every way, and part of that domination is going to require robust machine learning capabilities and speed, even for the small models that are better suited to mobile ML applications. They make their machines able to run small models insanely fast, and that's where they're going to have a huge edge in the future.

  • @darshank8748  3 months ago +2

    Google has a better transcriber in their Vertex API called USM, tbh.

    • @KarlynGR  3 months ago +3

      Then why is the YouTube one still trash?

  • @rupertchappelle5303  2 months ago +1

    Two MacBook Pros died after 14 months. If I could buy a new one every year, that would be just GREAT.
    8GB of RAM is not enough, but Apple figures that profits are better than selling a computer with enough memory to do the job. "Job" - does that remind you of someone??? Too bad we are Cooked.

  • @Alex82727  3 months ago +3

    I want a MacBook that has Apple silicon soooooo badddd 😭😭😭😭😭

    • @markclayton8977  3 months ago +4

      What’s your use case? The battery life on even the M1/M2 chips is phenomenal, the M3 chip mostly just adds performance. If you’re using it for light tasks, save some $$$ and get an M1 or M2 series chip

    • @Alex82727  3 months ago

      @@markclayton8977 I'm a photographer. I use Adobe Ps, Lr, and LrC, plus Xcode for the camera app I'm working on, and I need to connect two displays.

    • @ClearGalaxies  3 months ago

      🥵

  • @Mostafaabobakr7  3 months ago

    Red eyes! Check if this is normal

  • @saidd.  3 months ago

    I have no idea why the heck I am watching this now, but everything you say sounds cool. :))
    PS: no idea how to code at all, wish I could.

  • @AndysTV  3 months ago

    The insanely fast model is actually way faster on the 4090.

  • @netonCyber  a month ago

    Doesn't matter if there's like zero software to use on Apple silicon; it's just that devs always do Windows. Only billionaire devs support Mac, or browser game devs.

  • @nasirusanigaladima  3 months ago +1

    First again, from X to YouTube.
    Every day I get more impressed with the Apple chips and unified memory 😊

  • @RSV9  a month ago

    But … 8 GB on macOS is like 16 GB on Windows 🤔

  • @stendall  3 months ago +1

    Soooo, the real title of this video should be MLX extremely poorly optimized for CUDA cores.

    • @yvesvandenbroek6055  3 months ago

      MLX does not run on PCs, and there are no CUDA cores on Apple Silicon 🤷‍♂

  • @einstien2409  a month ago

    Use a simple RTX 4060 laptop without power plugged in.

  • @yesyes-om1po  20 days ago

    Too bad the proprietary silicon is anchored to the POS company which is Apple; I don't want to spend 800 dollars on an extra 64GB of memory.

  • @kashalethebear  29 days ago

    Whisper isn't AI.. no true AI yet exists lol

  • @divyanshbhutra5071  3 months ago +4

    Nvidia seriously needs to up their game with VRAM capacity. But why would they, when their competitors are as useless as Intel and AMD?

    • @utkarsh1874  3 months ago

      or apple

    • @divyanshbhutra5071  3 months ago

      @@utkarsh1874 Apple chips have a lot of memory

    • @PSYCHOV3N0M  3 months ago

      @@divyanshbhutra5071 Nvidia is working on ARM.
      They'll release something more powerful (even without tight optimization) than what Apple can ever hope to achieve.

    • @RunForPeace-hk1cu  3 months ago

      And kill off the H100 market? 😂😂😂😂😂
      You're so naive.

    • @RunForPeace-hk1cu  3 months ago

      @@utkarsh1874 The M2 Ultra has 192GB memory 😂😂😂😂 What are u on about?

  • @ryshask  3 months ago +1

    Python has contributed more to carbon emissions than any other programming language.

    • @AZisk  3 months ago

      lol

    • @TheDanEdwards  3 months ago +1

      So many tech bros on the net bragging about their AI on 4090s using Python, AS IF using Python is something to brag about (when it comes to performance or efficiency).

    • @PSYCHOV3N0M  3 months ago

      @@TheDanEdwards Which programming language would you say is the best?

  • @tambourinedmb  3 months ago +1

    8GB is like 16GB

    • @TheDanEdwards  3 months ago

      Anyone interested in LLM will have the knowledge or experience to buy the right machine for their use. Almost no base config Mac buyer is going to really care about playing with LLM code.

    • @TamasKiss-yk4st  3 months ago

      But that is reflected in the machine/OS itself, and the GPU VRAM can't even run the whole OS.. it actually can't even reach any data in the system RAM.. you need to copy the data from system RAM to GPU RAM to let the GPU use it.. so these are 2 different things you're mixing together.. can the 16GB RTX 4090 run a full benchmark? (As it runs the operating system too, not just part of the benchmark...)

  • @johnbreaker3874  3 months ago

    With all of those machines, you should do a giveaway xD, as I need your M3 Max, moahahaha

  • @andrewdunbar828  2 months ago

    French "l" sounds like "l". If it were double "ll" it would've sounded like "y".

    • @AZisk  2 months ago +1

      darn. should have asked my wife before vid.

  • @InsideGreatness-gh8wc  2 months ago +1

    Hide your kids, hide your wife

  • @netonCyber  a month ago +1

    Actually, if this is true then you didn't pick the best machine for the competition, cuz there are a bazillion non-Apple laptops; the mathematical consequence is that one of them has to beat the Mac, so clickbaiting us with this title is awful.

    • @Intel101-pe1et  3 days ago

      What kind of mathematics is that?

    • @netonCyber  3 days ago +1

      @@Intel101-pe1et statistics bro, plus probability

  • @Ricardofox12  3 months ago

    And Apple dares to say 8GB is enough

    • @AZisk  3 months ago +2

      not for ml. nobody said 8gb is enough for ml.

    • @CrYou575  3 months ago +1

      Microsoft said 640kB was enough.

  • @user-sam4465  3 months ago

    But with Windows laptops you will spend only a few dollars on upgrading the RAM, while with Apple you'll spend much more.

    • @lesleyhaan116  2 months ago

      and you are stuck to a wall outlet

    • @jasonwun6113  2 months ago

      Well, you need to carefully specify the use case of the RAM. In the AI world, the only RAM that matters is the RAM on the graphics card, and that is not relatively cheap to upgrade compared to a Mac.

  • @MrDovman  3 months ago

    What is the purpose of this computing power? Do you need it every moment of your day? And if you don't have it, is it a serious issue? I have a Mac Mini M2 at home. I also have 2 Windows PCs. I have no affection for these two machines that heat up, blow, scream, make a loud noise to obtain the power you're talking about. Not to mention the poor quality of plastics that crack and the miserable battery life of the laptop (whose power supply is larger and heavier than my Mac Mini M2). The production of PCs should be stopped.

  • @ClearGalaxies  3 months ago

    Apple beats the competition. As usual 🥱 #PCMasterRace? More like #PCObsolete 😂 /j

    • @crestofhonor2349  3 months ago

      PCs are still better than Macs in multiple ways. Far from irrelevant.

    • @ClearGalaxies  3 months ago

      @@crestofhonor2349 you're right. I was just trolling 💚

  • @Hunter_Bidens_Crackpipe_  3 months ago +1

    4090m is FAR superior

  • @vikasz2  3 months ago

    Can I have your cheapest MacBook Air M1 please? 😍