RTX 4090 Performance in 3D and AI Workloads

  • Published: 10 Jan 2024
  • I finally got my hands on a 4090, so it's time to put it through the wringer! Like my past videos on M3 Max performance, I want to see what this card is like in real-world 3D and AI workloads. We'll focus mostly on what it's like to use, and what I think are the key benefits and drawbacks. These cards are still hard to find and definitely not cheap, so let's see if it's worth it!
  • Hobbies

Comments • 11

  • @stacymittelstadt4537
    @stacymittelstadt4537 5 months ago +14

    Hi Matt… I am your old neighbor Stacy - just saying hello 👋

  • @Person-hb3dv
    @Person-hb3dv 26 days ago

    great video. subbed 👍

  • @daReturn1888
    @daReturn1888 6 months ago +2

    Great video, thank you. Much more useful than all these "reviews" that just list benchmarks. Perhaps add "M3" to the title; this video should have far more views.

  • @Dominik-K
    @Dominik-K 6 months ago +1

    Amazing video and great points. I love the practical side of using the hardware in AI research

  • @DerekDavis213
    @DerekDavis213 6 months ago +3

    You mention the M3 Max with 36 GB of unified RAM, and how that gives it some advantages over the 4090.
    Well, for what the M3 Max costs, you could buy an NVIDIA A6000 card with 48 GB for about 4500 USD. Definitely worth the money for people who need the performance. And only 300 watts TDP for the A6000 card. Cool and quiet.
    And the Blender results database shows the A6000 is 55 percent faster than the M3 Max. Wow.

  • @rrrrazmatazzz-zq9zy
    @rrrrazmatazzz-zq9zy 3 months ago

    Very instructive, thank you very much

  • @DRAM-68
    @DRAM-68 6 months ago +1

    Very interesting video. I didn’t think the base M3 Max would be that competitive with the 4090 for AI work. I intend to purchase an M3 Ultra when it comes out, hopefully the top memory model (256 GB). That should help with large LLMs. I also intend to get into Blender now that Macs have hardware RT. Your last few videos helped me decide on the Ultra. Thanks for the great info.
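
A quick back-of-the-envelope sketch of why a large unified-memory pool matters for big LLMs: weight memory scales roughly as parameter count times bytes per parameter, plus runtime overhead. The model sizes and the 20 percent overhead factor below are illustrative assumptions, not numbers from the video.

# Rough estimate of the memory needed to hold LLM weights at different precisions.
# The model sizes and the 1.2x overhead factor (KV cache, activations) are assumptions.
def weight_memory_gb(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    # 1e9 params and 1e9 bytes-per-GB cancel, so the formula simplifies to this:
    return params_billion * bytes_per_param * overhead

for name, params in [("7B", 7), ("70B", 70), ("180B", 180)]:
    for precision, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"{name} @ {precision}: ~{weight_memory_gb(params, nbytes):.0f} GB")

By this estimate a 70B model in fp16 lands around 168 GB, which is why 256 GB of unified memory helps while a 24 GB or 48 GB card does not cover it without heavy quantization.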

  • @martin777xyz
    @martin777xyz 5 months ago +2

    How are you hosting the card? Standalone PC, or as an eGPU? Would be interesting to know the little details 🙏

  • @Rednunzio
    @Rednunzio 6 months ago

    What real advantage does running an LLM yourself have? Do you use it only as a tool, like anyone can do with a web client, or do you actively contribute to the development of one of these models?

  • @DaarioNeharis
    @DaarioNeharis 6 months ago

    Can an RTX 3090 be good enough to run LLMs for inference?
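
For context, a minimal local-inference sketch, assuming the Hugging Face transformers stack on a CUDA build of PyTorch: a 7B-class model in fp16 needs roughly 14-16 GB of VRAM, which fits in the 3090's 24 GB, and larger models can fit with 8-bit or 4-bit quantization. The model name below is just an example, not one used in the video.

# Minimal sketch: fp16 inference with a 7B-class model on a 24 GB GPU.
# Requires: torch (CUDA build), transformers, accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example 7B model, ~14 GB of weights in fp16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the weights within 24 GB
    device_map="auto",          # place the model on the available GPU
)

prompt = "Explain in one sentence why VRAM capacity matters for LLM inference."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))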

  • @ZJ.Design
    @ZJ.Design 5 months ago

    Your channel is amazing! Very informational and packed with gems! Subscribed and liked! Please continue the awesome work!