Crazy Efficient: AMD Threadripper 7980X & 7970X CPU Review & Benchmarks

  • Published: 27 Sep 2024

Comments • 1.2K

  • @GamersNexus
    @GamersNexus  10 months ago +211

    Watch our livestream overclocking the AMD Threadripper 7995WX CPU! Crazy educational too as the engineers who joined us shared a lot of detail: ruclips.net/video/vU179_czCnU/видео.html
    Error/correction - There is a typo in some charts where the 3960X's name is next to an erroneous thread count. This does not affect testing or results. Our apologies for the error. Correct count is 24C/48T.
    Watch our coverage of the AMD Threadripper TRX50 & WRX90 motherboards here: ruclips.net/video/NTnVBIEPz1w/видео.html
    Find the AMD Threadripper CPU specs here: gamersnexus.net/news/new-amd-threadripper-7980x-7970x-7960x-threadripper-pro-cpus-announced
    Support our testing and grab a solder & project mat, modmat, or toolkit on the GN store! store.gamersnexus.net/ (currently 10% off at time of posting!)

    • @johnnypopstar
      @johnnypopstar 10 months ago +4

      Fingers crossed from across the pond that "later today" doesn't mean _too_ much later, because this is definitely something I want to catch

    • @jorgeaugusto6076
      @jorgeaugusto6076 10 months ago +1

      This is gonna be fun!

    • @1carusGG
      @1carusGG 10 months ago +2

      Do you think that Adobe still gives preferential treatment to Intel CPUs over AMD?

    • @spg3331
      @spg3331 10 months ago

      YES!!!

    • @islandfireballkill
      @islandfireballkill 10 months ago

      What kind of nonsense statement is "time TBD"? At least give a range.

  • @andyastrand
    @andyastrand 10 months ago +1507

    I wonder if Adobe will ever write something that threads efficiently

    • @GamersNexus
      @GamersNexus  10 months ago +457

      That'd certainly be nice for us. Resolve apparently does well here, though we haven't used it yet!

    • @sirmonkey1985
      @sirmonkey1985 10 months ago +61

      Nah, they pretty much moved their focus to hardware acceleration w/ Quick Sync support 3 or 4 years ago.

    • @xamanto
      @xamanto 10 months ago +69

      lol. lmao, even.

    • @Splarkszter
      @Splarkszter 10 months ago

      Adobe sucks anyway; I don't know why people keep using that overpriced crap.

    • @madant7777
      @madant7777 10 months ago +65

      Maybe when Intel stops paying them...

  • @literallyme5092
    @literallyme5092 10 months ago +426

    Finally, some new gaming benchmarks

    • @AyoKeito
      @AyoKeito 10 months ago +2

      I seriously wish it would be closer to the 7950X in gaming, rather than my current 5950X. If that were the case, I'd change platforms simply to install more GPUs for other tasks while still being able to game at pretty much best performance. But unfortunately, current results show it's still more sensible to run a different PC for more GPUs. It's a shame, I was ready to shell out a stack of cash! 🥲

    • @Splarkszter
      @Splarkszter 10 months ago +2

      ​@@AyoKeito If you just increase the graphics settings you don't even need to worry about that, and remember, the reason it runs like that is the frequency.
      Anyway, it's waaay cheaper to just have multiple desktop systems if you aren't a server company struggling to fit more computers in the same building.

    • @fleurdewin7958
      @fleurdewin7958 10 months ago +10

      Threadripper has always suffered from higher memory latency than the consumer-range Ryzen. Even with Zen 2 Threadripper, it was always +8 ns of memory latency over regular Zen 2 Ryzen with the same memory timings in AIDA64. Games love low-latency memory. And the mandatory use of Registered DIMMs on the TR 7000 series hurts memory latency even more.

    • @AceStrife
      @AceStrife 10 months ago +1

      @@Splarkszter "If you just increase the graphics"
      Sorry, this is a fallacy. GPU performance will always top out at something; you can't increase it indefinitely. And if you do (e.g. 8K downsampling), you're just dragging overall performance down from what it could've been (e.g. 60 FPS vs 120 FPS). This is even more important in emulation, which is extremely CPU-dependent for good performance.
      As someone with a 4090 and 5950X (waiting for the 8000X3D), I am constantly CPU-limited while trying to reach 120 Hz, irrespective of graphics options. And RT just drops it down even further because it still depends a lot on the CPU.
      If all you have is a 15-year-old 60 Hz monitor, then sure, that's a different discussion. But anyone playing modern PC games in an era of 500 Hz monitors is probably expecting 120 Hz+, and both the GPU and CPU need to be strong for that to happen.
      All of GN's CPU benchmark graphs show this issue very clearly. I wish they'd run some emulation tests, though; no outlet I know of does this.

    • @jogeem5480
      @jogeem5480 10 months ago

      ​@@AceStrife It's funny how 120+ is very little to ask from some games, while for others it's near impossible to reach with reasonable settings

  • @lonelyone69
    @lonelyone69 10 months ago +964

    AMD's focus on efficiency has been insane when you think about how they're outpunching chips with 2, 3, or 4 times the power draw.

    • @xXFlameHaze92Xx
      @xXFlameHaze92Xx 10 months ago +7

      and that's why nobody in the server market sees EPYC as an option.
      Because what does efficiency matter when you also have to pay penalty fees for power overconsumption in at least 20 states

    • @PanduPoluan
      @PanduPoluan 10 months ago +406

      ​@@xXFlameHaze92Xx You don't make any sense. Intel Xeons consume even more power, so companies choosing them would be penalized even more.

    • @lonelyone69
      @lonelyone69 10 months ago +391

      @@xXFlameHaze92Xx EPYC has literally been AMD's largest commercial success outside of their PlayStation deal... Server share has gone up 10% in the last 3 years alone...

    • @garrettkajmowicz
      @garrettkajmowicz 10 months ago +113

      Not too long ago it was Intel that was more efficient. Somehow AMD has managed to become the king of everything in the CPU world.

    • @marcogenovesi8570
      @marcogenovesi8570 10 months ago +192

      @@xXFlameHaze92Xx sorry, what? Do you know what efficiency means?

  • @brandoncoventry5662
    @brandoncoventry5662 10 months ago +433

    I'm in the world of molecular dynamics and I regret to inform you that NAMD is unfortunately pronounced "NAM-Dee". Major props for running these simulations; they are not easy to set up. Extremely helpful to me as I plan out lab computers.

    • @GamersNexus
      @GamersNexus  10 months ago +182

      I couldn't keep saying "N-A-M-D" and had to put my foot down! haha, thanks for the info, will use for next time. Are those tests useful in figuring things out for your field? It'll help us determine if we keep running them!

    • @brandoncoventry5662
      @brandoncoventry5662 10 months ago +153

      @@GamersNexus Totally! It took me about 2 years to finally figure out the "correct" pronunciation. And absolutely! These are ridiculously helpful, as documentation and testing of these programs on different hardware is at best limited and mostly nonexistent. You all are the only ones putting up benchmarks with these tools, and for that I am extremely grateful!

    • @Alklaine
      @Alklaine 10 months ago +47

      ​@@brandoncoventry5662 Do you know how these metrics relate to real-world benefit to you/your field, since you said they are hard to come by? Like, is it 1:1: if it's 30% better in a benchmark, can you expect somewhere around 30% gains in real-world application? Interested to know how much one metric matters more than another!

    • @radutazu
      @radutazu 10 months ago +40

      ​@@GamersNexus Many thanks for including these tests. I also work daily with molecular simulation software and can vouch for these tests being super useful for planning upgrades to workstations and compute clusters.
      Would it be possible to have some similar tests in the future for GPUs as well? Consumer-grade GPUs perform extremely well in software such as GROMACS, but finding benchmarks for it is extremely difficult.

    • @eric.is.online
      @eric.is.online 10 months ago +10

      @@radutazu oh god yes, this would be amazing. I run AMBER mostly in my day job but I'll take any GPU benchmarks for MD.

  • @WayStedYou
    @WayStedYou 10 months ago +224

    You know it's serious when Steve whips out the whiteboard.

    • @lldjslim
      @lldjslim 10 months ago +2

      Who is going to be the first lollipop 🍭 sucker that buys a threadripper cpu just for gaming only?

    • @justahologram2230
      @justahologram2230 10 months ago +4

      ​@@lldjslim That might be JayzTwoCents' plan for the next iteration of Skunkworks

    • @desertfish74
      @desertfish74 10 months ago

      Thanks Steve!

    • @scarletspidernz
      @scarletspidernz 10 months ago +1

      @@justahologram2230 nah Phil might get it so that video editing/rendering time goes down

    • @equinoxe3d
      @equinoxe3d 10 months ago

      @@justahologram2230 Nope, he said the horrible gaming performance ruled it out for Skunkworks, especially since it's his daily driver at home for gaming/streaming

  • @POLARTTYRTM
    @POLARTTYRTM 10 months ago +148

    I really loved the part "we don't know what the numbers mean but they're here". Honesty.

    • @GamersNexus
      @GamersNexus  10 months ago +47

      At the very least, we know how to run a test even if the software is foreign to us. Hoping the community lets us know which of those are useful so we can incorporate them fully!

    • @CapaNoisyCapa
      @CapaNoisyCapa 10 months ago

      I'm part of the fantasy world of finance, and I sincerely hope the science guys know what they're doing. We definitely do not.
      The FSI bench is measuring HFT in equity markets, particularly derivatives, I suppose. An HFT strategy and its algorithmic implementation must be unique to each operator for it to work. I'm not even remotely qualified to give a ten-minute PowerPoint presentation about it, but I don't see any practical reason for this test, at least in theory. I'm probably missing something obvious here, so please feel free to correct me.

    • @niklasnelimarkka2993
      @niklasnelimarkka2993 10 months ago

      The numbers, Mason, what do they mean?!

    • @greebj
      @greebj 10 months ago +1

      Being able to turn something you know nothing about into KPIs you can discuss confidently sounds like Steve missed his calling in marketing, but then I realised he's just too honest for it

  • @blar2112
    @blar2112 10 months ago +262

    Full-size performance cores running at 3.5 W each is extremely impressive; this is borderline big high-performance ARM core territory

    • @Azureskies01
      @Azureskies01 10 months ago +133

      Wendell (Level1Techs) found the 128-core EPYC chip was using just over 1 W per core. ARM is dead as long as AMD keeps this up.

    • @blar2112
      @blar2112 10 months ago +82

      @@Azureskies01 Wow.
      I feel pity for Intel, trying all that finicky E-core stuff, and then AMD performs better in low-thread tasks because they have the better-performing cores, and then again AMD performs better in high-thread tasks because all their threads are full performance cores...

    • @sryfshkbfyi
      @sryfshkbfyi 10 months ago +27

      ​@@blar2112 Yeah, it seems silly; they just can't compete with TSMC's cutting-edge processes. And I see it as a generally bad direction for things, because consumers benefit from healthy competition. Otherwise it's just a monopoly. I really want Intel to succeed in their ventures and be competitive. But it seems like cutting-edge electronics fabrication is such an expensive process that it will be impossible to push progress "forever" without some sort of technology consolidation into a single entity.

    • @Splarkszter
      @Splarkszter 10 months ago +67

      ARM is pretty interesting by itself. You wouldn't want ARM to die, though; there is a reason the x86 platform is focusing on efficiency.
      DON'T LET COMPETITION DIE, BE STRATEGIC.

    • @blar2112
      @blar2112 10 months ago +14

      @@Splarkszter Agree, ARM is cool

  • @nathanfay1988
    @nathanfay1988 10 months ago +32

    Sorry Steve, but the channel is called Gamers Nexus, not Productivity Nexus, so naturally we are drawn to gaming benchmarks :)

    • @GamersNexus
      @GamersNexus  10 months ago +20

      Channel rename on April 1?!

    • @user-ke1gn3ql1g
      @user-ke1gn3ql1g 10 months ago +4

      ​@@GamersNexus Nerds Nexus would be funny. Bonus points if you add this thing 🤓 lol

  • @sephondranzer
    @sephondranzer 10 months ago +33

    Steve: I’m gonna call you all out…
    Me: Hold that thought GN, I need to skip to the gaming section

  • @urmensch12
    @urmensch12 10 months ago +214

    The disclaimer doesn't stop me from wanting a 64-core Threadripper for a gaming-first PC

    • @InternetListener
      @InternetListener 10 months ago +57

      to run Flight Simulator at below 2% CPU usage.

    • @hueanao
      @hueanao 10 months ago +14

      I hope that in the future we see some games optimized for lots of cores.
      It would be interesting, for example, to have a massive grid in a racing game where each AI has its own dedicated thread.

    • @aRealAndHumanManThing
      @aRealAndHumanManThing 10 months ago +6

      ​@@hueanao Or using the efficiency-core concept from Intel to add some sort of parallel computing ability to your CPU.
      Just imagine what graphics would be possible if you could outsource something like ray tracing to mostly underused parts.
      I don't know enough about this stuff for more in-depth ideas, but it seems like a logical point for improvement

    • @deansmits006
      @deansmits006 10 months ago +12

      Or try an EPYC-X server CPU, it has the extra L3 cache! It's gotta pull big gaming numbers, right? Right?

    • @xXx_Regulus_xXx
      @xXx_Regulus_xXx 10 months ago +19

      "I know this product is not for my use case, but I want to signal to corporations that I'll buy it even though the value proposition is objectively bad"

  • @Warren_Elrod
    @Warren_Elrod 10 months ago +265

    Hey GN, noob thought here: if you ran 2 tests at the same time, would that give different insight into the 32-to-64-core scaling?

    • @GamersNexus
      @GamersNexus  10 months ago +316

      That's an interesting thought. Yes. It would definitely change the dynamic: If you were bound to 32 cores max in some application, you could theoretically run 2 instances of it and increase the throughput. Handbrake is a good example of this: You could spawn multiple Handbrake instances with more cores. Great question - thanks for posting!

    • @whycouldntthebicyclestandup
      @whycouldntthebicyclestandup 10 months ago +1

      ​​@@GamersNexus Could you set core affinity in Windows and run two or four "normal" all-core benchmarks? It would be interesting to see if tasks go across chiplets, etc.

    • @Splarkszter
      @Splarkszter 10 months ago +9

      Yup, virtualization is a very cool thing. Sometimes there may be some shenanigans, but if you run a Linux-based OS made for that, you could run multiple single-core-heavy tasks at the same time, for example.

    • @user9267
      @user9267 10 months ago +1

      ​@@Splarkszter
      That would change boost behavior

    • @KenS1267
      @KenS1267 10 months ago +2

      It's not really applicable to workstation CPUs, and I'm not 100% sure why such high-core-count workstation CPUs even exist, but in servers, virtualization is a prime use for such high-core-count parts. There are a couple of virtualization benchmark suites out there that might be relevant, assuming virtualization isn't disabled on these CPUs.

  • @blahblahblah1787
    @blahblahblah1787 10 months ago +50

    The main benefit of Threadripper is the substantial bump in PCIe lanes and no longer having to deal with that "if I want to use 3x NVMe drives, the PCIe slot gets disabled" nonsense.

    • @deadmanshand4138
      @deadmanshand4138 10 months ago +5

      This. The ONLY reason I would want the TR is for the lanes.

    • @TonkarzOfSolSystem
      @TonkarzOfSolSystem 10 months ago +12

      Seems like there’s a hole in the market for “lane ripper”.

    • @vinnyveritas9599
      @vinnyveritas9599 10 months ago

      That's a hefty price to pay for PCIe lanes.

  • @Mpdarkguy
    @Mpdarkguy 10 months ago +533

    I love how in less than 30 seconds you went from “wow it’s so efficient” to the LN2 tank.
    It’s a long way down that efficiency curve isn’t it lol

    • @GamersNexus
      @GamersNexus  10 months ago +338

      But there's so much room to make it faster and pull 1200W! We have to do it!

    • @Benethen_
      @Benethen_ 10 months ago +151

      ​@@GamersNexus finally a competitor to Intel's chilled 5GHz 28-Core CPU

    • @RadarLeon
      @RadarLeon 10 months ago +11

      @@GamersNexus Will be looking forward to this; those numbers are going to be insane

    • @Mpdarkguy
      @Mpdarkguy 10 months ago

      @@GamersNexus Make sure you guys try out Cities Skylines 2 for maximum efficiency benchmarking

    • @poppyrider5541
      @poppyrider5541 10 months ago +1

      @@GamersNexus Predictions?

  • @GreySectoid
    @GreySectoid 10 months ago +23

    Thanks for bringing back the code compilation benchmark; there are a huge number of programmers who like their binaries served quickly.

  • @scottg7321
    @scottg7321 10 months ago +18

    23:59 I am technically part of this industry as a quant. In this industry, the models are sometimes used by firms that need the calculations faster than other firms so they can arbitrage first. But more often than not, other models are calculated with similar methods, so I imagine these benchmarks are useful.
    Also, although there are many types of Monte Carlo simulations, there is really only one type used for time series, so the benchmark is probably using that one. However, I am not too familiar with the details of the benchmark you provided, so I couldn't say anything for certain about it. Lmk if y'all have any questions
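For readers curious what a time-series Monte Carlo looks like, here is a toy sketch (not the benchmark's actual code): geometric Brownian motion, the standard model behind most equity price-path simulation. All parameter values below are illustrative assumptions.

```python
import math
import random

def gbm_paths(s0, mu, sigma, dt, steps, n_paths, seed=42):
    """Simulate final prices of geometric-Brownian-motion price paths."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)  # one standard normal shock per step
            s *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * z)
        finals.append(s)
    return finals

# Illustrative parameters: $100 start, 5% drift, 20% vol, one trading year.
finals = gbm_paths(100.0, 0.05, 0.20, 1 / 252, 252, 2000)
mean_final = sum(finals) / len(finals)
print(round(mean_final, 2))
```

Each path is independent of the others, which is why this kind of workload scales almost perfectly with core count.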

  • @ellejharmsen4801
    @ellejharmsen4801 10 months ago +6

    As a molecular simulation PhD, I also loved seeing these workstation tests included. LAMMPS especially is really relevant for me. I actually think the test case in LAMMPS is also molecular dynamics (just like NAMD); they just implemented some of the calculations/parallelization differently.
    What might be important to note is that in my field we typically run a simulation 3 to 5 times with identical settings, just different initial configurations (the results here have statistical value). So to me, seeing the results of the 32 cores compared to the 64 cores set up as 2 simulations running next to each other would be very informative as well. (Or both running 16-thread simulations until the CPU is 100% loaded would be good too.) I would be interested in what the CFD people here think of that.

  • @GadgetryTech
    @GadgetryTech 10 months ago +16

    Amazing video as always. AMD’s work per watt improvement has been amazing. I can’t wait to see what Zen 5 does next year!

  • @RFC3514
    @RFC3514 10 months ago +9

    For 3D rendering tests, and since nearly all 3D renderers these days support network rendering, it would be interesting to see how one of these systems compares (in terms of price, performance and power consumption) with two or four render nodes using regular AM5 CPUs.

  • @khatdubell
    @khatdubell 10 months ago +2

    Glad to see the compilation benchmark.

  • @johiahdoesstuff1614
    @johiahdoesstuff1614 10 months ago +17

    A 96-core CPU would be pretty incredible for server purposes, yeah? Like for EVE Online: you could host 2 solar systems per core, one per thread, for 192 systems on a CPU, with tons of headroom for having many players connected?

    • @dead-claudia
      @dead-claudia 10 months ago +7

      precisely why epyc cpus are so wildly popular for servers with high compute requirements 🙂

  • @FlyingShoe
    @FlyingShoe 10 months ago +3

    Thanks! Have been waiting on the new Threadrippers for a while now.

  • @vali69
    @vali69 10 months ago +36

    You know, maybe the best benchmark for these CPUs would be compiling Gentoo.

    • @GamersNexus
      @GamersNexus  10 months ago +21

      We can look into that!

    • @arek314
      @arek314 10 months ago

      @@GamersNexusyes, please!

    • @reav3rtm
      @reav3rtm 10 months ago +3

      As a Gentoo user (and developer)... many compilation tasks are not well parallelized. For many projects, a lot of time is spent in the configure phase, which is sequential. And the packages themselves are merged sequentially. I would say the compilation tests are enough. Whole-distro compilation is maybe a use case for distribution maintainers, but it's a very rare use case for the typical user. Some really large projects that do benefit, like Chromium, also require lots of RAM for that many parallel compilation tasks.
      (However, I don't want to be gatekeeping.)

    • @vali69
      @vali69 10 months ago +1

      @@reav3rtm To be honest, I was just thinking about how funny it would be to have that as a benchmark, since yeah, it's pretty much one long compilation test that can take days on some less powerful/mid-range CPUs depending on the packages installed, so I shared that thought. You don't want to know how shocked I was that GN actually responded and even said they'd look into it!

    • @arek314
      @arek314 10 months ago +3

      @reav3rtm While it might be true, I know from the Internet what the usual Gentoo specialist outfit is. And I really want to see Steve in knee-high socks...

  • @TheNerd
    @TheNerd 10 months ago +14

    I think Photoshop realized the power and speed of 64 Zen 4 cores and said: "Not even Chuck Norris has this much Power" and closed itself in an early defeat.

    • @desertfish74
      @desertfish74 10 months ago +7

      Adobe holding back the industry for decades now and going strong!

  • @Zosu22
    @Zosu22 10 months ago +1

    Woah, this is the first time I've seen a corrections section integrated into the YouTube description

  • @TomaszWiszkowski
    @TomaszWiszkowski 10 months ago +6

    Thank you for the excellent video! It's super exciting to see the return of HEDT CPUs from AMD.
    I really miss the 3990X on these charts; 3970X vs 7980X is not exactly the right comparison generation-wise...
    I appreciate the highlight of the 3970X vs 7970X, which kind of captures what could be expected.
    Fantastic video!

  • @moto3463
    @moto3463 10 months ago +2

    As an Intel fanboy over the years, I've woken up. Looks like AMD will be taking the cake for future-proof CPUs for gaming, etc. My next build will be AMD after my 13900K becomes obsolete

  • @Artholos
    @Artholos 10 months ago +3

    Considering how the prices of the 3960X and 3970X have come down, and how remarkably efficient they still are, getting yourself a used Threadripper is an insane value proposition right now!

  • @Petercubic
    @Petercubic 7 months ago

    As someone who purchased the 3970X and is planning on building a 7970X machine, one thing that was interesting to me was the inflation comparison between the two parts. I know it was only a couple of years, but I purchased my 3970X in March 2020. The comparative purchasing power today would be about $2,370. So, yes it's more expensive, but so is everything I guess. Just an interesting comparison to make when you're looking at high value parts where the small amount of inflation actually shows up in a reasonable way. Thanks for the great content as always!

  • @ZWortek
    @ZWortek 10 months ago +3

    One thing I've been wondering is why the 7950X3D isn't on the Blender efficiency charts, especially when it appears it would be as efficient as, if not more than, the 7800X3D. It pulls ~62% of the power of the 7950X and is about 7% slower than it in Cinebench.
    Using the 7950X3D review video for efficiency scaling, it would sit at basically 17.6 Wh, which is insanely efficient for consumer parts.

    • @AK-Brian
      @AK-Brian 10 months ago

      I was going to ask about the same thing. It looks like it was omitted unintentionally, but would otherwise be the top scorer for efficiency in that test after the new Threadripper parts. I'd love to see where it officially ends up with that test metric.
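The estimate in this thread is plain energy arithmetic: watt-hours = average power × runtime. A sketch of that math follows; the baseline power and time figures are illustrative assumptions for the sake of the calculation, not GN's measured data:

```python
# Hypothetical 7950X baseline, chosen only for illustration:
base_power_w = 230.0  # assumed all-core package power during a render
base_time_h = 0.115   # assumed render time, in hours

# The commenter's scaling for the 7950X3D: ~62% of the power, ~7% slower.
power_w = base_power_w * 0.62
time_h = base_time_h / 0.93  # 7% slower => ~7.5% longer runtime

energy_wh = power_w * time_h  # watt-hours for the whole render
print(round(energy_wh, 1))
```

The takeaway: lower power only wins on energy if the runtime penalty is smaller than the power saving, which is the case in the commenter's numbers (0.62 / 0.93 ≈ 67% of the baseline energy).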

  • @RHODEZ
    @RHODEZ 10 months ago

    The Level1Techs jab is the best part of the video; we all love you, Wendell

  • @gabrielecarbone8235
    @gabrielecarbone8235 10 months ago +6

    my god TSMC and AMD are really killing it

  • @ojonathan
    @ojonathan 10 months ago +2

    Amazing tests. It's interesting to see how well these CPUs perform at code compilation, because despite it being a highly "parallelizable" workload, at the end of the compilation pipeline, where you have to assemble a single artifact (or a handful of individual artifacts), compilers and linkers are very reliant on the performance of individual cores.
    And these CPUs are clearly worse at single-core performance (not only because of power and thermal concerns; scheduling also gets harder), but they save so much time going through the parallel bits extremely fast that, even if they take longer on the non-parallel bits, they're still faster and more efficient at the job than their lower-core-count counterparts.
    Still, like you said, you have to evaluate whether your workload takes advantage of it or not, and also be aware that there's a difference between a highly multithreaded workload and a highly "parallelizable" workload; the latter benefits way more from high core counts than the former, which commonly has more interdependency between threads, so one thread stalling may negatively affect the others.
    I'm also a little curious whether these CCDs are as powerful as the consumer lineup, and how the gaming performance would fare with only one CCD enabled. Yes, it's crazy to buy this monster and proceed to disable all but one CCD, but the question is whether those cores can hold higher clocks for longer when they aren't power- and thermal-constrained.
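The serial-tail effect described above (a parallel compile capped by a mostly serial link step) is classic Amdahl's law. A small numeric illustration, where the 95%/5% split is an assumed example rather than a measured build profile:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: the serial tail (e.g. the final link step) caps scaling."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A build that is 95% parallelizable compile and 5% serial link:
for n in (16, 32, 64):
    print(f"{n:2d} cores -> {amdahl_speedup(0.95, n):.1f}x speedup")
```

Even at 64 cores the speedup tops out around 15x under these assumptions, which is why single-core performance still matters for the link step.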

  • @evanscott6323
    @evanscott6323 10 months ago +3

    It might be worthwhile to include Apple systems in benchmark showdowns of this type. Creative and high end workflows are one of the areas Apple targets with their chips, and seeing how they compare to threadripper in various tasks, especially Photoshop and Lightroom, would be really instructive in determining the real value proposition of their systems.

    • @Exoleres
      @Exoleres 10 months ago +1

      I'm not sure it matters that much since the takeaway from any Photoshop chart is always that Photoshop is a monster of bloated spaghetti code that always scales poorly.

  • @vibonacci
    @vibonacci 10 months ago +1

    Please keep these Threadripper reviews coming. Can't wait for the 7995WX review, and to see what benefits those higher-grade chips actually provide and what influence the memory channels have. At $11,000, that's a steal.

  • @ChinchillaBONK
    @ChinchillaBONK 10 months ago +1

    GN: Guys! Stop looking at the gaming benchmark timestamps for these workstation workloads.
    Us: Don't mind us. We just want to know the bare minimum we need while we game, render 3D animations, transcode Handbrake videos, run OBS in the background, etc. etc., ALL AT THE SAME TIME.

  • @guy_autordie
    @guy_autordie 10 months ago +51

    I do think the 24 cores will be useful for Cities: Skylines 2. But yeah, for the other games, the 7800X3D is enough or better.

    • @trucid2
      @trucid2 10 months ago +10

      Better. 7800X3D is king.

    • @spacebound1969
      @spacebound1969 10 months ago +9

      ​@@trucid2 Threadripper is awesome, but this chart just reinforces how amazing the 7800X3D is for simulation games. It's really its own class.

    • @GamersNexus
      @GamersNexus  10 months ago +63

      Cities Skylines needs an RTX 7090!

    • @worried-woemwoem
      @worried-woemwoem 10 months ago +2

      Cities Skylines 2 seems to love higher-core-count CPUs; there was a post where someone had a 600k-pop city on a 7950X3D with almost no slowdown in the simulation, even at 3x speed

    • @tanthokg
      @tanthokg 10 months ago

      I thought that game was GPU-intensive and heavily single-threaded. Or so I heard

  • @Pitipicqou
    @Pitipicqou 10 months ago +1

    Thanks for the detailed review as always! Are you planning a thorough review of the 7960X as well?
    Especially for video production workloads, the 7960X looks on paper like it might hit a sweet spot: you get the lanes of Threadripper for a second GPU, dedicated video output, high-speed networking or storage, and higher clocks than the 7970X or 7980X, at a much lower cost (according to Puget, the 5965WX was almost tied with the higher-tier 5th-gen Threadrippers for content creation).
    Those features would make it the ideal CPU for a lot of video professionals.
    The inclusion of Resolve benchmarks would also be really cool; although your test suite is already a lot of work, it might help out quite a few people

  • @hexalyte
    @hexalyte 10 months ago

    6:59 Look how excited Steve looks when he pulls out the whiteboard LOL

  • @Azureskies01
    @Azureskies01 10 months ago +20

    AMD's server products are the most efficient chips to have ever been made. They are burning clean and running on all cylinders.
    Now if only their GPU division could be as amazing.

    • @dogdie147
      @dogdie147 10 months ago +3

      All of the money is being gobbled up by the Ryzen team. The Radeon team is so incompetent they couldn't even market a main selling feature right, which is fking sad 😢

    • @Azureskies01
      @Azureskies01 10 months ago +3

      @@dogdie147 They are doing what they need to do on the hardware side, even with RDNA3 not being as proficient as they might have projected. Their driver (and whole software side) is the division that's really lacking.
      My 7900XT shouldn't be pulling over 100 watts when I want to load up Oblivion without mods, and it is only doing that because the card, for whatever reason, needs to run its VRAM at full frequency (which causes the card to draw ~90-100 watts by itself).
      Hell, the reason the idle power consumption was an issue (and still kind of is) was the cards not clocking the VRAM down below 909 MHz (i.e. max frequency).

    • @bocahdongo7769
      @bocahdongo7769 10 months ago +2

      Those RDNA GPUs are actually CRAZY efficient if you know how to do undervolting
      If

    • @sammiller6631
      @sammiller6631 10 months ago +1

      AMD's GPU division could be running on all cylinders and most _gamers will still not buy anything but Nvidia_ because Nvidia's manipulative marketing is very strong. The fear of missing out is too strong for gamers to resist even at the lowest end where Nvidia fails to stand out.

    • @Azureskies01
      @Azureskies01 10 months ago

      @@sammiller6631 Radeon didn't have to release FSR3 and could have put all that time and energy into making FSR2 a better DLSS. They didn't, because they are chasing Nvidia again.
      Radeon didn't have to come out with chiplets when they clearly weren't ready (idle power draw, multi-monitor power draw, crashing in 10-year-old games like FFXIV, something I personally had happen for the first 5 months of owning my 7900XT). They did anyway.
      Radeon never misses their feet when they shoot for the moon.

  • @WastingSanityGR
    @WastingSanityGR 10 месяцев назад +1

    3:35 We have been called out!! No, but seriously. He has a point.

  • @RadarLeon
    @RadarLeon 10 месяцев назад +4

    how to club intel over the head in the most brute force way possible

  • @cburgess7
    @cburgess7 10 месяцев назад +1

$5,000 doesn't just buy any computer; for that money you can buy a full, top-of-the-line gaming setup.

  • @movax20h
    @movax20h 10 месяцев назад +1

The 7980X is clearly memory-bandwidth limited in many of these benchmarks. You can sometimes see zero scaling despite the workload itself scaling well (things like parallel decompression and rendering); in places with less memory bandwidth demand, like compilation or compression, it scales much better (and the remaining difference can be explained by lower base/boost clocks and imperfect scaling of the benchmark itself, i.e. linking during compilation)
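The plateau this comment describes can be sketched with a simple roofline-style throughput model. All numbers below (per-core GFLOP/s, bandwidth, bytes-per-FLOP ratios) are illustrative assumptions, not measurements of any real chip:

```python
def throughput(cores, gflops_per_core, mem_bw_gbs, bytes_per_flop):
    """Attainable throughput (GFLOP/s): compute capacity grows with
    core count, but platform memory bandwidth is a fixed ceiling."""
    compute_limit = cores * gflops_per_core
    if bytes_per_flop == 0:
        return compute_limit  # purely compute-bound workload
    memory_limit = mem_bw_gbs / bytes_per_flop
    return min(compute_limit, memory_limit)

# A bandwidth-hungry job (decompression-like) stops scaling once the
# bandwidth ceiling is hit; a cache-friendly job (compilation-like)
# keeps scaling with core count.
heavy = [throughput(n, 50, 300, 0.5) for n in (16, 32, 64)]
light = [throughput(n, 50, 300, 0.05) for n in (16, 32, 64)]
```

With these made-up numbers, `heavy` flatlines from 16 cores onward while `light` doubles with each doubling of cores, which is the zero-scaling pattern described above.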

  • @iLegionaire3755
    @iLegionaire3755 10 месяцев назад +1

    Here’s hoping you get to review the 7995wx Threadripper PRO, after the overclocking sessions!

  • @kintustis
    @kintustis 10 месяцев назад +5

3.5 W per core? I'd love to have a 56 W 16-core on desktop without manually stepping down the (poorly documented) voltage-frequency curve in the BIOS

    • @Sunlight91
      @Sunlight91 10 месяцев назад +9

      Just buy a 7950X and activate the 65W-ECO mode.

  • @Rhynri
    @Rhynri 10 месяцев назад +2

    I'd really love to see the 7960X benchmarked with these per-core numbers broken out like you did here, and an idle wattage for all of these chips. I run a gaming VM home server setup so energy efficiency is good to know.

  • @572089
    @572089 10 месяцев назад +2

I'd love to see benchmarking on VMs running on the Threadripper chips. I can imagine another use case for them is running small slim-client servers for small companies who don't want to dish out 100k for an enterprise solution, but who still need decent processing power for, say, 8 different stations simultaneously.
With 64 cores you could be running 9 different "6-core" VMs natively off the hardware with a whole 10 cores left over for background tasks, not to mention all the PCIe lanes that could be running storage in RAID, making disk writes for the VMs redundant and near-instantaneous. That could be a game-changer for small businesses.
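The core math in that comment checks out (9 × 6 = 54, leaving 10 of 64). A trivial sketch of such a static partition — a hypothetical pinning layout, not tied to any particular hypervisor:

```python
def partition_cores(total_cores=64, vms=9, cores_per_vm=6):
    """Split core IDs into per-VM pinning pools plus a host-reserved
    remainder -- the static partitioning scheme described above."""
    needed = vms * cores_per_vm
    if needed > total_cores:
        raise ValueError("not enough cores for the requested VMs")
    pools = [list(range(i * cores_per_vm, (i + 1) * cores_per_vm))
             for i in range(vms)]
    host_reserved = list(range(needed, total_cores))
    return pools, host_reserved

# 9 six-core VMs on a 64-core part, 10 cores left for the host
pools, host = partition_cores()
```

In practice the pools would feed something like libvirt/taskset CPU pinning, and a real layout would also respect CCD boundaries so each VM's cores share an L3 slice.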

  • @olnnn
    @olnnn 10 месяцев назад +5

    Wonder how this stacks up to high core count ARM CPUS like the Ampere Altra since they've started experimenting with making workstation stuff based around it as demoed by Jeff Geerling (and for that matter Apple's M3 Ultra whenever that comes out)

  • @TriMiro8107
    @TriMiro8107 4 месяца назад

    I know I'm late to the show but I really appreciate these reviews. Many of us that do software development (compiling) or other types of productive work in addition to gaming need the benchmarks like Chromium compile. Even if we don't compile, it is representative of a class of workloads that isn't just gaming. Same with blender and 7zip. Thanks for doing all the work and cheers from a copper cup.

  • @Jordan_StorageReview
    @Jordan_StorageReview 10 месяцев назад +1

I put mine on LN2. You and Bill better have some good bins ;)

  • @qfan8852
    @qfan8852 10 месяцев назад +1

    I do find that the 12 cores in my Ryzen 5900x are barely enough now. Some games are starting to utilize 8-cores, and the remaining cores have to handle a lot from various other background software I use. 16-core will be the minimum next time I upgrade. A low end Threadripper still can be an option.

  • @DemonicPoker
    @DemonicPoker 10 месяцев назад +7

Another great, neutral, informative review. I'm one of the people needing these beasts for daily computations, so I'm excited to learn everything before eventually ordering one. Would it be possible for you to run some Passmark tests with the 7980X and 7970X? Those scale very well with how my own workload fares against other CPUs. I do a lot of 'Monte Carlo'-style work, so if it's similar to the financial Monte Carlo you ran, it's super promising!

    • @GamersNexus
      @GamersNexus  10 месяцев назад +4

      Can you give some more background on why/how Passmark represents your work? That'd help us in planning (and explaining it if we introduce it). Can you give an example of how your work relates to some kind of real world "output" or result? What does it mean for you if a CPU is faster? That'd help a ton. Thank you!

    • @DemonicPoker
      @DemonicPoker 10 месяцев назад +1

@@GamersNexus Myself and a lot of 'poker colleagues' run a lot of computerized simulations, Monte Carlo-type algorithms, to build our strategies. When we look at the internal benchmark solves we run in our software, we see that they relate very closely to Passmark scores in general. Of course there are some small exceptions (as always), usually related to the number of cores/threads. (Generalized: our simulation software runs faster with more cores, as a very simplified explanation, but the scalability tracks very closely with what we see in Passmark, besides a few exceptions.)

  • @Fanaticalight
    @Fanaticalight 10 месяцев назад

Adobe Try To Be Efficient Challenge (Impossible). Honestly, it boggles my mind to see how efficient this CPU is for production workloads right out of the box.

  • @ImPDK
    @ImPDK 10 месяцев назад

    Thanks for testing such a wide range of software. As the "PC specialist" of my friend groups it's good to be able to give more informed advice

  • @L3v3LLIP
    @L3v3LLIP 10 месяцев назад

You just sold another Threadripper for gaming. Thanks, I will enjoy writing emails and playing Excel and Word on it!

  • @cIappo896
    @cIappo896 10 месяцев назад +2

    HEDT's value isn't raw performance, but time. If a job takes you 8 hours, and you can save 1/3 of the time, you can do 50% more work.
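The arithmetic in that claim holds: cutting each job's wall time by a third raises throughput by half. A one-line check:

```python
def throughput_gain(time_saved_fraction):
    """Fractional increase in jobs per hour when each job's wall time
    drops by `time_saved_fraction` (e.g. 1/3 -> 0.5, i.e. +50%)."""
    if not 0 <= time_saved_fraction < 1:
        raise ValueError("fraction must be in [0, 1)")
    return 1.0 / (1.0 - time_saved_fraction) - 1.0

gain = throughput_gain(1 / 3)  # 0.5 -> 50% more work in the same hours
```

An 8-hour job becomes 5 hours 20 minutes, so three runs now fit where two did.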

  • @zpd8003
    @zpd8003 10 месяцев назад +10

    Please, GN, compile and post your complete benchmark charts on your new website! Every review shows partial charts that include only a (random) selection of models that you've reviewed, which makes sense for presentation purposes but users want to have a complete reference SOMEWHERE that they can use for their own comparisons. This is especially important for PC cases and CPU coolers, where some very old models still outperform many new ones. At times I've had to open several different old reviews to look at partial charts in order to get a sense of how one model compares to another, and it can be frustrating. 🥺

    • @pachete.
      @pachete. 10 месяцев назад

They will in a week's time; they said so in their video about the website

    • @zpd8003
      @zpd8003 10 месяцев назад

      @@pachete. Are you sure? I know they'll post their old reviews, but that's not what i'm talking about. I've never heard them say they'll be posting complete charts, but maybe I've missed something.

    • @pachete.
      @pachete. 10 месяцев назад

      @@zpd8003 ruclips.net/video/Mrdw1fiqPmI/видео.html

  • @shaneeslick
    @shaneeslick 10 месяцев назад

    G'day Steve,
    I watched the Livestream first, not only was it lots of fun but also BIG THANKS to Amit & Bill for their time answering technical questions.
    Also as you mentioned it in the livestream it would be really cool if you did make the GN Logo Blender test available for us to use at home so then we can test our CPUs that are not on your list (like my Athlon 200GE) to see how terrible they are at rendering for a laugh😁.

  • @blackbird42
    @blackbird42 10 месяцев назад +3

I must say, it is nice to see these units tested with more "industrial" software, like FEM etc. The regular test suite just doesn't show what these CPUs are capable of, in my opinion.

  • @kennethmadsen6474
    @kennethmadsen6474 10 месяцев назад

    I love Threadripper CPU's. So glad they are coming back.
    I bought a first gen Threadripper, and it's still running in my home server.

  • @Sgt_SealCluber
    @Sgt_SealCluber 10 месяцев назад +3

    Hmm, I'm wondering how these would perform in games if you split them into groups of 8 cores with a video card. In say a really niche case of workstation by day, entire family gaming rig by night.

    • @bocahdongo7769
      @bocahdongo7769 10 месяцев назад

Absolutely could
But it depends on the motherboard and setup anyway

  • @MaxwellHay
    @MaxwellHay 10 месяцев назад +2

ARM-based CPUs are getting really interesting now. Please add them to the testing list

    • @dead-claudia
      @dead-claudia 10 месяцев назад

      they're not quite here for the workstation tho, so it's a bit early imo

    • @andersjjensen
      @andersjjensen 10 месяцев назад

Until there's a reliable and fast x86 emulation layer for ARM, that's not really possible, since the vast majority of software used in these tests are native x86 applications.

  • @WhoAreYouQuestionmark
    @WhoAreYouQuestionmark 10 месяцев назад +3

    How do you guys verify that ECC is indeed enabled and functions correctly on a software side?
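One common software-side check on Linux (not necessarily what GN uses; their bench runs Windows) is the kernel's EDAC interface: if a memory-controller instance is registered under sysfs, the kernel is receiving ECC reports, and its corrected/uncorrected error counters can be read. A minimal sketch:

```python
import os

EDAC_ROOT = "/sys/devices/system/edac/mc"  # Linux EDAC sysfs interface

def ecc_status():
    """Return (controller, corrected, uncorrected) tuples for each
    registered EDAC memory controller, or [] if none is registered
    (which usually means ECC reporting is unavailable or disabled)."""
    if not os.path.isdir(EDAC_ROOT):
        return []
    results = []
    for mc in sorted(os.listdir(EDAC_ROOT)):
        path = os.path.join(EDAC_ROOT, mc)
        if not mc.startswith("mc") or not os.path.isdir(path):
            continue
        def read_count(name):
            try:
                with open(os.path.join(path, name)) as f:
                    return int(f.read().strip())
            except (OSError, ValueError):
                return None
        results.append((mc, read_count("ce_count"), read_count("ue_count")))
    return results
```

Functional verification goes further than this (e.g. injecting errors via EDAC's debugfs hooks or deliberately destabilizing memory and watching `ce_count` climb), but the counters existing at all is the first signal that ECC is live end to end.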

  • @JohnCarter04
    @JohnCarter04 10 месяцев назад

    CPU review with a WHITEBOARD and a LN2 OC livestream in the same day??? Y'all are crazy. Love the content, to everyone over at Gamer's Nexus: thank you!

  • @Phaevryn
    @Phaevryn 10 месяцев назад +4

    But can they play Crisis?

    • @GamersNexus
      @GamersNexus  10 месяцев назад +13

      Haha, just wait until they have enough cache to fit Crysis in cache.

    • @hueanao
      @hueanao 10 месяцев назад +1

@@GamersNexus Now that you said it, I wonder if an X3D Threadripper would make sense.
IIRC, on Ryzen, the X3D is only useful for gaming.

  • @ninepoints5932
    @ninepoints5932 10 месяцев назад +1

    As an AMD shareholder, I hope every one of you viewers builds a new threadripper machine to play games. Preferably one new TR machine per game.

  • @IndellableHatesHandles
    @IndellableHatesHandles 10 месяцев назад +9

    A Ryzen 7 1700x is worth about $50, while a similarly-aged Threadripper costs about $30 more _and_ requires a special motherboard.
    In conclusion, you might expect that a Threadripper part would be good for gaming after a few years, but given the actual cost to get one, you'll always be better off with a newer Ryzen 5 or i5.

    • @GamersNexus
      @GamersNexus  10 месяцев назад +12

      Well, not always. For mainstream use, definitely. But there are professional use cases where current-gen R5/i5 parts just won't do what you need in heavy enough workloads.

    • @bocahdongo7769
      @bocahdongo7769 10 месяцев назад

      For some people, nah
Those PCIe lanes are real estate; you can load a fk-ton of PCIe devices without worrying about which one goes where and gets disabled or whatever

    • @bocahdongo7769
      @bocahdongo7769 10 месяцев назад

@@GamersNexus I do wish those PCIe gen 5 motherboards could do a super-split PCIe mode that drops to a lower PCIe version but doubles the lanes.
Like, you'd have cheap HEDT there

    • @IndellableHatesHandles
      @IndellableHatesHandles 10 месяцев назад

      @@GamersNexus That clarification is important. I meant for gaming, of course, and made that comparison because only older Threadrippers would have a similar platform cost to an i5 or Ryzen 5. I guess I didn't make that entirely clear.

    • @guiguipau
      @guiguipau 10 месяцев назад

      @@GamersNexus Yes but : you need more compute power? Buy anything - including EPYC or XEON - for pro use that matches your compute need and fits your price range, and get a mainstream CPU - Ryzen, i5, whatever - for everyday purpose. It will be both cheaper and a better experience.
      HEDT is dead, and while these TR 7000 and 7000 PRO themselves are good, the cost AND lack of versatility of the platform, mostly through the absurdity that TRX50 and WRX90 are compared to any decent EPYC 9004 mobo for instance, render the platform a basic scam.
I consider HEDT dead for now, and if it weren't for the scam that DDR5 currently is (come on, it's been 2 years and we don't even have 64 GB mainstream UDIMMs), and if we at least had those DIMMs, where you could run 256 or 512 GB of standard DDR5 in 4 slots, this would not even be a discussion. You need real professional features like tons of PCIe lanes, compute power and at least 1 TB of RAM? Get a server CPU and board. You're a prosumer doing rendering, huge labs and prototypes? Get a Ryzen 9 or an i7 / i9 and 256 / 512 GB of DDR5.
Let's be real: with CPUs normally progressing in terms of compute power, by the time Zen 5 / Arrow Lake or Zen 6 / Nova Lake are out, if 64 / 128 GB DDR5 UDIMMs are out, this pseudo-HEDT is probably dead. People who need more of everything will go to server chips. People who only need more RAM will stick to mainstream.

  • @robertpearson8546
    @robertpearson8546 10 месяцев назад

    Correction. "Before VRM efficiency losses" should be "Before VRM inefficiency losses". The 1920 buck converter is about 60% efficient. That means the VRM generates almost as much heat as the CPU. You should look into water-cooling your VRM. Of course, the 2011 Ćuk-buck2 has an efficiency of around 99%, is smaller and costs less than the 1920 design. But that would require the board designers to learn some of the 100 years of improvements in power conversion.
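Whatever one makes of the specific converter designs cited, the heat claim follows from the efficiency figure alone: at 60% efficiency a VRM dissipates about two-thirds as many watts as it delivers, which is indeed "almost as much heat as the CPU." Illustrative math, not a measurement of any board:

```python
def vrm_heat_w(p_delivered_w, efficiency):
    """Watts dissipated inside the VRM while delivering
    `p_delivered_w` to the CPU at the given conversion efficiency."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    p_input = p_delivered_w / efficiency
    return p_input - p_delivered_w

# At the quoted 60% efficiency, a 300 W CPU load means ~200 W of VRM
# heat; at the quoted ~99%, the same load wastes only ~3 W.
loss_60 = vrm_heat_w(300, 0.60)
loss_99 = vrm_heat_w(300, 0.99)
```

(For what it's worth, modern multiphase VRMs are typically quoted closer to 85-95% efficient than 60%, but the formula is the same either way.)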

  • @Qyngali
    @Qyngali 10 месяцев назад +3

    Small point: The TR parts don't have more cache than the regular 7000 series, the total cache doesn't matter. What matters is cache per core, and that is identical except for the X3D parts. If they decide to release X3D TR... yeah lol. But those won't have more cache than the 7800 X3D per core either.

    • @5467nick
      @5467nick 10 месяцев назад

      You are correct. I wonder if the reduced cores per CCD models (or if disabling cores in each CCD manually) would make a difference. The 24 core part has the same L3 as the 32 core part, granted that's still far from the cache per core of the X3D CPUs. I doubt AMD will sell X3D threadrippers since productivity workloads don't tend to benefit from it. Then again, AMD does sell a few EPYCs with the extra cache, so who knows? Maybe someone will at least try to get one of those EPYCs into a board that allows overclocking. Some people did it with the older EPYCs.

    • @TigonIII
      @TigonIII 10 месяцев назад

      @@5467nick I just saw der8auer's video before this one and he tries out some CCD and core configurations. Definitely worth a watch.

  • @haikumists1115
    @haikumists1115 10 месяцев назад

    I confess. I watch the gaming segments for Threadripper reviews because I think it's fun to see how a CPU not meant to handle these kind of workloads performs. But it would be amazing to get to work on something that can use a Threadripper lol.

  • @AndersHass
    @AndersHass 10 месяцев назад +3

    Who would have thought the most popular part of a GAMERS Nexus videos would be GAMING, lol

  • @ryderbrooks1783
    @ryderbrooks1783 9 месяцев назад +1

The "roofline model" is a better way to think about benchmarking; FMA is the speed limit.
Everything boils down to a ratio of how many arithmetic operations you do per byte transferred. All the different cache architectures are just an attempt to deal with the transfer side of that equation.
Also, your charts would get MUCH more interesting if you compiled for the specific CPUs, as there are certain computations that would expose order-of-magnitude differences between products.
An example would be: multiply-and-add half floats (f16) and see what happens when you compile for different targets as the buffer size increases (Xeon W vs. TR gets interesting...)
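The roofline idea this comment describes reduces to one formula: attainable throughput is the lesser of peak compute and arithmetic intensity times memory bandwidth. A sketch with made-up machine numbers (2000 GFLOP/s peak, 250 GB/s bandwidth — purely illustrative):

```python
def roofline_gflops(ai_flops_per_byte, peak_gflops, bw_gbs):
    """Attainable GFLOP/s for a kernel with arithmetic intensity `ai`
    (FLOPs performed per byte moved): bandwidth-bound below the ridge
    point, compute-bound above it."""
    return min(peak_gflops, ai_flops_per_byte * bw_gbs)

# Ridge point: the intensity where the two ceilings meet.
ridge = 2000 / 250                           # 8.0 FLOPs/byte here
low_ai = roofline_gflops(1.0, 2000, 250)     # bandwidth-bound
high_ai = roofline_gflops(16.0, 2000, 250)   # compute-bound (FMA limit)
```

Kernels left of the ridge (like the f16 multiply-add example once the buffer spills out of cache) only see the memory side of the machine, which is why compiling for wider vector units changes nothing for them but transforms the compute-bound cases.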

  • @garrettkajmowicz
    @garrettkajmowicz 10 месяцев назад +3

    Don't forget the additional benefit of the increased I/O. Would some of these GPU tests be able to run with multiple GPUs installed in the system?

    • @roanwestraat9604
      @roanwestraat9604 10 месяцев назад +4

There is no way to leverage more than 1 GPU for gaming besides the odd accelerator here and there. It's a dead concept.
Now, if you are looking to run some parallel workstation workloads over multiple GPUs, we're talking options.

    • @bocahdongo7769
      @bocahdongo7769 10 месяцев назад

Apps that utilize full GPU compute don't really care about PCIe speed, since the GPU doesn't need to communicate that much with the CPU during processing, and consequently CPU load stays low.
Because of that, performance just scales with GPU performance anyway.

  • @benjaminsonksen4451
    @benjaminsonksen4451 10 месяцев назад

The desire to use server or workstation boards and CPUs for gaming is courtesy of EVGA, when they made their SR-2 Classified dual-socket-1366 server board capable of using dual Intel Xeon processors, 12 DIMMs of memory, and up to 4 graphics cards. The other cool dream for cool-factor at that time was the Supermicro quad-socket AMD Opteron server board. For gaming, my i5-6600 mini-ITX with a GTX 1080 Ti still runs most games at high settings at 4K, with zero issues in 3D design rendering. The idea of having a T-rex monster system is always going to be way more awesome than a little gecko, though.

  • @XenoGraphica
    @XenoGraphica 10 месяцев назад +4

Please can you guys figure out a Blender viewport test for these CPUs? No one in their right mind spends this kind of money on a CPU and doesn't have a 4090 to render on. People who use Blender want to know VIEWPORT performance: can the CPU play back an animation in solid mode at the required fps? How are fluid bake times? How are other physics bake times, etc.? I really appreciate your reviews, but render times are useless.

  • @rezz77s
    @rezz77s 10 месяцев назад

    first thing I did was look at the scan bar and saw that huge peak in the gaming benchmarks

  • @doniscoming
    @doniscoming 10 месяцев назад +4

    I think it would be cool to compare that to M3 Ultra or whatever the best apple silicon thing is right now - kinda like best of what prosumers can expect on both ends :)

  • @schwarziex3563
    @schwarziex3563 10 месяцев назад +1

Love that you include Stellaris in the gaming tests. That's the only game that constantly forces me to upgrade my CPU. I really hope a Ryzen 8000X3D will bring great improvements

  • @paxdriver
    @paxdriver 10 месяцев назад +1

    24:05 the financial and probability simulations are very important for machine learning - whether that's training or inference, depending on the model architecture. It's a very big deal for ML but even bigger for time series data analysis in market dynamics when you assess a large system of signals and indicators which compare to one another and are compared across several time scales in addition to many different products whose price action is being recorded. Log complexity on millisecond updates across dozens or hundreds of items needs a ton of processing if you don't want to miss out on an arbitrage or trade intraday algorithmically.
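As a concrete example of the workload class being described, here is a minimal Monte Carlo pricer for a European call under geometric Brownian motion. Each simulated path is independent, which is exactly why this class of simulation scales across many cores. Toy parameters, stdlib only — a sketch of the technique, not production quant code:

```python
import math
import random

def mc_call_price(s0, strike, r, sigma, t, paths, seed=0):
    """Monte Carlo price of a European call: simulate terminal prices
    under GBM, average the discounted payoffs. Every path is an
    independent draw, so paths split trivially across cores."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    payoff_sum = 0.0
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    for _ in range(paths):
        z = rng.gauss(0.0, 1.0)            # standard normal shock
        s_t = s0 * math.exp(drift + vol * z)
        payoff_sum += max(s_t - strike, 0.0)
    return math.exp(-r * t) * payoff_sum / paths

# At-the-money call, 1 year out; converges toward the Black-Scholes
# value (~10.45 for these inputs) as path count grows.
price = mc_call_price(s0=100, strike=100, r=0.05, sigma=0.2, t=1.0,
                      paths=20_000)
```

Error shrinks with the square root of the path count, so "more cores = more paths per second = tighter estimates in the same wall time," which is the practical reason these simulations are benchmarked on high-core-count parts.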

  • @ChrisGR93_TxS
    @ChrisGR93_TxS 10 месяцев назад +5

    lets see how long this one is gonna stay supported

    • @teamplays2252
      @teamplays2252 10 месяцев назад +2

As compared to what exactly? Intel's power-hungry heaters performing at half the speed or less? Warranty is all the support I need; gimme performance 🤩

    • @andersjjensen
      @andersjjensen 10 месяцев назад

      AMD has made no claims about socket support. Expect to retire the motherboard when you need something faster. If you don't like that then you're SOL as Intel's upcoming Fishhawk Falls Refresh will also be EOL after this "generation" (it's basically "14th gen Xeon W").

  • @AdamariMedia
    @AdamariMedia 10 месяцев назад

    Excited for the livestream later today!

  • @JP-1962
    @JP-1962 10 месяцев назад +2

    "Everybody jumps to the gaming [section of the video]" Well who'd have believed that people subscribed to a channel called GAMERS Nexus would have done that? 😁🤣

  • @JamesHillery
    @JamesHillery 10 месяцев назад +1

    At least for me, the interest in "checking the gaming scores" for these cpus is the desire for a single computer that can handle heavy workloads while also still being great for gaming. I'd throw a decent chunk of cash at something that can do both without compromise. These new threadripper cpus had seemed interesting in that they might have opened up a new pricier option for people trying to get both gaming and workstation performance, but it looks like you give up a lot on the gaming side of things here despite them being very impressive workstation cpus. At some point it starts to make sense just to build two completely separate machines dedicated for each use case, but, aside from that, the sweet spot for a single machine option is likely still the 7950x3d or 7950x (depending on your tolerance for tinkering with core affinity settings for more gaming perf).

  • @TMDragoncro
    @TMDragoncro 10 месяцев назад +1

    Thanks Steve, now i know i can play games on Threadripper.

  • @MelroyvandenBerg
    @MelroyvandenBerg 8 месяцев назад +1

    Thanks was very helpful for me. I'm a software engineer doing cross-platform compilations. And other compile work. even Linux kernel.

  • @kungfujesus06
    @kungfujesus06 10 месяцев назад +1

    Jesus, that was a confident LN2 pour with bare hands.

    • @zachcollins4442
      @zachcollins4442 10 месяцев назад

You can pour LN2 on bare skin as long as it runs straight off. LN2 causes problems when it's held against the skin, as in submerging a hand or soaking your clothes.

    • @andersjjensen
      @andersjjensen 10 месяцев назад

      Look up "the Leidenfrost effect" if you want to know why he (for good reason) wasn't the slightest bit worried.

  • @EmblemParade
    @EmblemParade 10 месяцев назад

    I think people are jumping to the gaming benchmarks because this is much of your audience. Your channel is called "Gamers Nexus" for a reason. :) These people just don't need or buy workstations, but they are curious about technology and what it can do.

  • @nster3
    @nster3 10 месяцев назад

    I'm going to buy a 7960X and TRX50 board for gaming and watching RUclips just to spite Steve :D I'd love to see some VM/Hypervisor and database workloads for ThreadRipper!
    I miss the glory days of HEDT gaming, my i7 920 D0 stepping and OCed samsung DDR3 half-height RAM was a BEAST, my 3930K and TR 1920X less so but we don't talk about those. I'd say ThreadRipper is the competitor for Xeon W-2000 series and TR Pro competes with W-3000 series, especially looking at specs and pricing.

  • @joshduffety-wong9618
    @joshduffety-wong9618 10 месяцев назад +1

    24:41 for the gaming benchmarks! Never mind that other unimportant stuff ;)

  • @powerpower-rg7bk
    @powerpower-rg7bk 10 месяцев назад +2

Wow, that efficiency on the TR 7980X. That low v-core helps a lot here and it is very, very good. I hope you're going to be overclocking that particular chip later today, as it looks like a good sample.
Why were some games run with ECC disabled? I see them on the charts but didn't hear them mentioned. There was some talk of how these are high-cache CPUs, but the lower-core-count V-Cache-enabled parts, the Ryzen 7800X3D in particular, were able to trounce them in gaming. I would bet that you can get V-Cache-like gaming performance on the TR 7980X, but you would need to disable 56 of the 64 cores, and in a way where only one core per CCD is active. That'd actually give the TR 7980X more cache per core than even the Ryzen 7800X3D. Really curious how that would compare, as there would be the oddity that the L3 cache is not directly shared on the TR 7980X in that configuration (cache data duplication, more die-to-die coherency traffic, etc.).
I'm really curious how the TR 7960X fares. It'd be a nice jump to the higher-PCIe-lane platform without the premium CPU price (the premium being relative to the TR 7970X and 7980X parts).
As for testing, I am curious what the TR Pro 7995WX could do on this platform. Granted, that is an 8-memory-channel part on a 4-memory-channel board, but it is a valid and supported upgrade path. So while AMD only had a single generation of parts for TRX40, the TRX50 platform will have an upgrade path due to being able to run WRX90 chips in 4-channel configurations.
    As for the future, I'm optimistic that there will be a second or even a third generation of WRX90 parts. I suspect there will be a second generation of TRX50 parts. The main thing I'm hoping for are V-cache enabled models as the performance uplift it brings is generally stronger than additional cores, especially for more day-to-day tasks. HEDT and workstations are performance monsters when loaded but v-cache gives a system that extra snappiness when creating the models, moving the data around prior to hitting run on a long simulation.

  • @leviathanpriim3951
    @leviathanpriim3951 10 месяцев назад

    watched this after L1T Wendells vid, interesting to see the different testing and feature explanations. this set looks great for this review as well

  • @jonathandunn9302
    @jonathandunn9302 10 месяцев назад

    I would love to see a Unity compilation benchmark, I spend an hour a day looking at that loading bar everytime I make a code change...

  • @frank1380
    @frank1380 10 месяцев назад

    The channel name is Gamers Nexus and yet we get a dressing down for jumping to the gaming section of the reviews. Gamers gonna game.

  • @vkiwi2429
    @vkiwi2429 10 месяцев назад

    Thanks Steve n team, this helps heaps with my cost benefit forms for management. they want to see number/bar go up mean workers work harder/less downtime

  • @theopdiamond8349
    @theopdiamond8349 10 месяцев назад +1

    wow i have never been this early, i'm gonna predict the future and say great review steve! :)

  • @TommyMcD
    @TommyMcD 10 месяцев назад

    I have no need for threadripper but love to see these videos.

  • @CommissarHolt_
    @CommissarHolt_ 10 месяцев назад +1

    GN: They are NOT for gaming. DO NOT buy them for gaming.
    Viewers: Yeah... but what if.....

  • @McTroyd
    @McTroyd 10 месяцев назад

    Appreciated the HPC (high-performance compute*) benchmarks being added to the mix for the high-core CPUs. I, too, have no idea what most of them mean in real-world terms, but it's cool to see some of the numbers these CPUs might actually be crunching. If you come across someone that _does_ know about these things, is doing something cool, and wants to show off the compute, I wouldn't object to seeing that. HPC is cool. (*-defined because too many things use the letters HP)

  • @endreh8406
    @endreh8406 10 месяцев назад

    Lmao the call out at 3:20 excellent. Well done guys