Apple's Mac Mini Surpassed by Nvidia's Project Digits for AI?

  • Published: 22 Jan 2025

Comments • 71

  • @Seraph137 14 days ago +14

    Now, the best part of this release is competition for Apple, which really hasn't had any in a while. Imagine if Apple came out with their own version of Digits, but with a much more friendly UI. And it would be able to fit into the Apple ecosystem.

    • @Ewakaa 13 days ago +1

      For now I don't think they can, but if they acquire Intel, surely.

    • @Seraph137 12 days ago

      @@Ewakaa I agree, I don't think they are in a position to either, my guess is they will partner with OpenAI ChatGPT and integrate it into the OS search and a few of their own apps.

    • @cbarak72 10 days ago +1

      They have the Mac Studio, which is also based on ARM and has unified RAM. Plus a more robust OS than Nvidia's crap.

    • @waynemorellini2110 10 days ago

      Don't they have an M4 Ultra mini coming?

    • @ashishpatel350 7 days ago

      Apple is just a marketing company that drop-ships products.

  • @andikunar7183 14 days ago +6

    Project Digits' compute horsepower (and RAM size) looks amazing, but its memory bandwidth is likely just similar to an M4 Max's, because of its 8 RAM chips (512-bit width) and DDR5X in the illustrations. That's less than 1/4 of their 5090, and an upcoming M4 Ultra could be 2x faster. But its unbeatable advantage over Apple is its CUDA + GPU virtualization support. Even if Apple might implement fp4/8, they never seem to think enough about datacenter technologies, which are applicable even to secured desktops, and are too focused on end-user thinking. I like my Macs for their price+RAM, but I will probably get a Project Digits box because of this.
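
As a rough sanity check on the bandwidth arithmetic in this comment: LLM decoding is largely memory-bandwidth-bound, so a common rule of thumb is tokens/sec ≈ bandwidth / bytes read per token (roughly the model's weight size). This is a sketch; the 512-bit bus is from the comment, while the 8533 MT/s data rate is an assumption, not an official spec.

```python
# Sketch of the bandwidth -> token-throughput rule of thumb.
# The 8533 MT/s figure is an assumed LPDDR5X data rate, not a confirmed spec.

def bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak bandwidth in GB/s: (bits / 8) bytes per transfer * transfers/sec."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

def tokens_per_sec(bandwidth_gb: float, model_size_gb: float) -> float:
    """Upper-bound decode speed if every token streams all weights once."""
    return bandwidth_gb / model_size_gb

bw = bandwidth_gbs(512, 8533)            # ~546 GB/s, M4 Max territory
print(round(bw))                         # 546
print(round(tokens_per_sec(bw, 70), 1))  # ~7.8 tok/s for a 70B fp8 model
```

Note how the result lines up with the "8 tok/sec" Llama 3.3 70B figure quoted elsewhere in this thread.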

  • @energysavingday 7 days ago +3

    Apple have been eking out enhancements to their over-priced RAM and storage and now find themselves facing an aggressive arms race.

  • @Seraph137 14 days ago +2

    I see having both: a Mac mini and the Digits. I like my Mac for my computer usage, but the idea of having the Digits to run a major LLM that I control and have easy access to is rather amazing.

  • @amirkhosro428 13 days ago +2

    I hope someone can explain this to me and guide me. Project DIGITS offers 1 petaflop of FP4 processing power. Besides training LLM models, can I use it for image processing and deep learning? Most models in this field use FP16, and some require FP32. If not, what options are available within a budget of around $3,000?
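
A back-of-the-envelope answer to the question above, as a sketch: on many recent accelerators, peak throughput roughly halves each time the operand width doubles, so a 1-petaflop FP4 figure implies far less at FP16/FP32. The halving ratio is an assumption about a common hardware pattern, not a confirmed DIGITS spec.

```python
# Hypothetical scaling: many recent accelerators roughly halve peak
# throughput each time operand width doubles (FP4 -> FP8 -> FP16 -> FP32).
# Real per-chip ratios vary (sparsity, tensor-core support), so treat
# these as order-of-magnitude estimates only.
PEAK_FP4_TFLOPS = 1000  # the "1 petaflop of FP4" figure from the comment

def peak_tflops(bits: int, fp4_peak: float = PEAK_FP4_TFLOPS) -> float:
    return fp4_peak * 4 / bits

for bits in (4, 8, 16, 32):
    print(f"FP{bits}: ~{peak_tflops(bits):.0f} TFLOPS")
```

Under this assumption, FP16 work would see on the order of ~250 TFLOPS, not the headline petaflop.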

  • @kamertonaudiophileplayer847 12 days ago +2

    It's good I didn't buy a Mac mini yet. So this small brick is my dream now.

  • @CarlosValero 13 days ago +1

    Very excited to hear about Project Digits! I have been playing with LLMs recently and was struggling to find a decent configuration to run models with 70B parameters. This machine came out just in time. I was thinking about getting several Mac minis. No need for them any more!

    • @王子启 8 days ago

      Why not get a Mac Studio instead?

  • @everlasts 15 days ago +6

    Based on this statement: "Project Digits can run Llama 3.3 70B (fp8) at 8 tok/sec (reading speed)."
    Currently my MacBook Pro M3 Max (128GB RAM) can run Llama 3.3 70B (fp8) at 5 tok/sec. I think the M4 Max will be a little bit better, so theoretically it would be close to Project Digits' claims.

    • @dailytekk2 15 days ago

      Apple chips are nothing to sneeze at FOR SURE. That said, being able to run a 400B out of your house is crazy stuff…

    • @AaronFigFront 14 days ago

      @dailytekk2 70B is already the limit. 400B at 4-bit, not sure how useful that is.

    • @waynemorellini2110 10 days ago +2

      Exactly, not 20-60 tokens a second, which they should have done. At $3000 that's not good enough. We expect 64GB 5090 cards to come. But what if a 96 or 128GB card comes (the Chinese are already upgrading 4090 cards to 64GB aftermarket)? Then such a card in a cheap PC can blow this away at that price. Please Nvidia, at least make it 256GB at $2000 or less, with at least 20 tokens per second on 70B 8-bit, then we can take it seriously. 1-2TB at up to 60 tokens per second, you would have all our attention. Don't show us the prototype for a low-end Valve Steam OS gaming console for $3000.

    • @gregdee9085 9 days ago

      @waynemorellini2110 You really don't think Nvidia would know that?? Ask yourself what you might be missing...

    • @waynemorellini2110 8 days ago

      @gregdee9085 They should know. So why don't you think they are doing that? What are you missing?

  • @Oishiilicious 4 days ago +1

    Imagine Apple coming out with a 128 GB RAM device. How much will that cost? Ain't no way they sell it for 3k.

  • @KarasCyborg 14 days ago +1

    So many unanswered questions: how many CUDA cores? How much power will it pull?

  • @saturdaysequalsyouth 12 days ago +1

    It's not ugly; it's a mini version of their DGX systems for data centers.

  • @a.nobodys.nobody 2 days ago

    I'm considering both. Plus a Mac Studio M4 Ultra due early this year. I have a $4k budget, give or take $500 for extra peripherals (like a nice screen or two).

  • @KanielD 13 days ago

    It's exciting to see what's possible now. There is plenty of new tech on the way to TSMC's A14 node that will push everything forward while driving down costs. Combined with models becoming more efficient, it's hard to imagine where we'll be in 5 years.

  • @Maxspert 2 days ago

    I would like to make my personal offline assistant with this device.

  • @vasiovasio 13 days ago

    Does anyone know the name of the software on the screen of this Nvidia marketing photo?

  • @ashishpatel350 7 days ago +1

    This is the first step... hope Linux starts to support it.

  • @SoCalSurfer69 12 days ago +1

    What if you don't want to do AI, but use it for gaming and video editing or animations?

    • @brandonrobinson3829 9 days ago +1

      Don't waste your money then; that hardware is almost entirely for AI alone, which is the only reason I want one.

  • @DanPavelDoghi 15 days ago +2

    How? When one is 600 USD and the other one is 3000 USD?

    • @dailytekk2 15 days ago +1

      You can stack several of the top-tier (non-entry-level) Minis for a local AI stack. By the time you do that you've spent more than $3k.

    • @TobiasHam 15 days ago +3

      Mac mini: 4 teraflops. Nvidia Digits: 1000 teraflops.

    • @Ewakaa 13 days ago +1

      I have the 600-buck one and I'm returning it lolz, and I am ready to pay $4,000 for Digits. It's just based on AI preference; if you want something to train or run a model, then this is it. People are stacking 4090 GPUs just to get a fraction of this power. Data centers have H100s and sell compute; people pay over $500 a month for these. Having such a PC dedicated to AI inference is super amazing and one of the best things to happen this year. Nvidia just killed Groq and took a huge chunk of Amazon revenue.

    • @Ewakaa 13 days ago

      I have a MacBook for my everyday use, a tablet, and an iPhone lolz, but when it comes to AI compute, I need a desktop that can deliver, and I don't care about ecosystem 😂

  • @niv8880 8 days ago

    I believe it may hit Apple's profits or force Apple to do more in AI, but these are two different types of machine for two different markets.

  • @Shaunmcdonogh-shaunsurfing 15 days ago

    What OS would Project Digits run on? Maybe I missed the whole point though.

    • @dailytekk2 15 days ago +1

      It’s more about AI than general computing… nobody who wants to buy a Mac Mini “to use as a normal computer” would be interested in the Nvidia for sure. But if you want to run a HUGE local LLM out of your house or office….

    • @AaronFigFront 14 days ago

      Linux

    • @Mr.HighTech 14 days ago +1

      I guess it was Nvidia's version of Linux.

    • @maximilianobregante4751 13 days ago +1

      I read it's an "Nvidia layer" or flavor on top of Ubuntu 22.04 or something like that. You could use it as your workstation.

  • @noelsaw 13 days ago

    In my personal experience, none of the newer local LLMs, even with large parameter counts, can compete with ChatGPT 4o or o1 for the coding work I've done. I am sure that could change in the future.

    • @Ewakaa 13 days ago

      Lolz, try Gemini Deep Research. And Claude beats 4o. Only o1 is more capable than Claude. Gemini is in the same bracket with o1 and uses less compute. Google have the best engineers in the world and more want to join them. OpenAI has lost a lot of talent. Losing Ilya, the guy who actually built AlexNet, paving the way for backpropagation algorithms to be implemented, is a big, big loss. And trust me, OpenAI needs to acquire talent to continue being at the top. Sam said they are losing money on o1 pro; only a few are going to pay $200/m for an AI model.

    • @pikaa-si9ie 12 days ago

      Can compete or can't compete?

  • @jagkahlon 11 days ago

    I would take a general-purpose system that can also do AI, so for the high-end user you would still go with a Mac mini or a Studio. I am able to run Llama 3.1 7B on my Mac mini (Intel) with 32 GB mem (non-unified); yes, it is relatively slow, but it works fine for my needs. An M4 Mac mini or Studio will supercharge it; the key is unified memory, and the more the better.

  • @misury 1 day ago

    So, now the question is: will Nvidia use its own supercomputers to improve its new supercomputers?

  • @JoeBrigAI 9 days ago

    Where are the fans?
    Will this push Apple to make 128GB the base for Studio Ultra?

  • @edemkumah5248 15 days ago

    You could be very wrong; the average user will be using everyday software, most of which will be AI-driven. And the great AI models around have large parameter counts.

  • @MarkMenardTNY 9 days ago

    It looks seriously interesting.

  • @CraigMcIntosh 14 days ago

    I am starting to save to buy one of these, and then buy another, so I can run my own LLM at home for my business use.

  • @johnkost2514 14 days ago

    Apple silicon has serious competition. Nvidia Thor is their other superchip, which makes the GB10 look like a little brother.

  • @radeksparowski7174 2 days ago

    If I understand the idea correctly, the base unit will be a starter kit capable of running a waifu-type personal assistant LLM and replacing a beefy desktop workstation. If it's not enough brute power for your needs, you can scale up simply by adding another module connected via NVLink (or whatever they call it nowadays). So you can set up a supercomputer, if you have the pocket change, by creating a Beowulf-style cluster, and solve immortality or hack the Pentagon to find out the truth about UFOs, or where the 6 trillion moneyz that are unaccounted for disappeared to...

  • @gdotone1 10 days ago

    No! The cost is killing it.

  • @jasonvaughan5128 13 days ago +2

    Yeah, 10x the price though. 😂

    • @151mcx 11 days ago

      And 10x more power... 250 watts... And not really useful beyond AI.

  • @thomasdeleon685 14 days ago

    Hell yes, I can't wait.

  • @darylcheshire1618 14 days ago

    All I need is a white cat.

  • @gdotone1 10 days ago +2

    This costs how much? $3,000? They're not selling a bunch.

  • @michaellatta 13 days ago

    The M4 Ultra will match or surpass this at 2x the price.

  • @waynemorellini2110 10 days ago

    High-end AI is orders of magnitude more than this. A 70B model is at least 70GB of memory at minimal quality/accuracy, or double that at quality accuracy, and double that again at high quality. Trying to do quality training is double again. That's 960GB of memory, plus the processing to run this at a decent rate. But this is not the 405B model, or others that go above 1000B. Nowhere near corporate, or even old AI workstations. $3000. Look at the top-of-the-line Apple small form factors and AMD AI APUs when this comes out in May, and compare. You might as well explore the cheapest way to do 70B, or 70B at decent token speed, while you wait for a better machine to come down in price. 512GB+ is what you might buy next. If you want to do the max, then 960GB+. This is quality at the low level.
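
The memory sizing in the comment above can be sketched with simple arithmetic. This counts weights only (the KV cache and activations add more on top), so treat the results as lower bounds rather than exact requirements.

```python
# Weight memory only: parameters (in billions) * bytes per parameter.
# KV cache and activations add more on top, so these are lower bounds.
def model_weight_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * bits_per_param / 8

print(model_weight_gb(70, 8))    # 70.0  GB: a 70B model at 8-bit
print(model_weight_gb(70, 16))   # 140.0 GB: the same model at 16-bit
print(model_weight_gb(405, 4))   # 202.5 GB: a 405B model at 4-bit
```

This is why 128GB of unified memory comfortably fits a 70B model at 8-bit but nothing close to a 405B model at full precision.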

  • @SuperMachead1 10 days ago

    Since it's running the Linux-based Nvidia DGX OS…I don't think any consumer will buy one…since Apple is a consumer-based business…they win…Nvidia might sell a lot of these…but it won't be to consumers…totally different audience

    • @user-pt1kj5uw3b 7 days ago

      Yeah, this is not competition for Apple lmao.

  • @victorrobert4600 8 days ago

    Apple is in full-on panic mode right now. If they are not panicking, they are the next BlackBerry and Nokia and don't even know it yet. Apple is on a 3-4 year refresh cycle on the Mac mini, and Nvidia, with new chips coming every year, will literally rip the face off Apple.
    Looking like game over.

  • @MARKXHWANG 13 hours ago

    What do you mean undercut? Apple AI is a joke. A 4090 is 10 to 20x a Mac Studio.

  • @warmidia8766 11 days ago

    Why would anyone now buy a maxed-out MacBook Pro when Nvidia DIGITS is a thousand times faster? Think about that. GO NVIDIA. 👍🏾👍🏾👍🏾

  • @MichaelDomer 4 days ago

    That Nvidia thing is ugly, but so are Apple computers; that new Mac Pro, for example, is butt ugly.

  • @THEARPE07 5 days ago

    80% of cool new AI projects from Hugging Face will not work on a Mac. I bought a Mac M1 and regret it; MacBooks are not AI friendly. Not buying a Mac again. Had enough.

  • @devonglide1830 15 days ago

    I think it's butt ugly. That said, I'm thinking of a Mac Studio M4 (whenever it comes out), but at the same time, if something like this can do AI better, I'd consider it. Especially if it can be run across the network so that I can set it up away from my visual field. I don't know enough about it at the moment to really commit; I just know that I burn through my AI credits lately, and having something local tuned to what I like to do might be worthwhile.