No it isn't. First of all there's no such thing as an "enterprise/business chip", which is a buzzword with no definition. This just sounds like mental gymnastics because you can't afford one and only have a low end desktop processor with 16 PCIe lanes. People who aren't satisfied with toys will go Threadripper.
THERE IS COMPETITION FROM INTEL - the Xeon W-2400 lineup! So many reviews and no one even tried to compare against Intel's actual fresh HEDT, the W790 platform. What kind of review incompetence is this? Yes, Intel would lose, but so what? Does that mean we can just ignore it and only compare to desktop CPUs? LTT made a video about W790 Xeon W, and when fresh TR comes out you just ignore them? Mindblowing.
The lower temps from the 7980x is actually due to the much lower power per core. The cores are operating far more efficiently from less leakage. I’d say 90% of the lower temps are due to this and surface area is only 10%.
When it comes to AVX-512 you also need to mention whether the chip clocks down like the Intel side of things. Intel hobbles their chips when using AVX-512 because of how much power it takes, so they slow the whole chip down to a lower clock speed in order not to blow their entire power budget (and also not to create a melted pile of CPU under your already toasty cooler).
Not an issue with Zen 4. There are no AVX-512 offsets, and the dynamic clocks don't drop any lower under full load with AVX-512 than they do with any other kind of load.
@@Dygear But it's not a highlight. It's just a lack of bizarre behavior found on older Intel chips running AVX-512. A video about those chips is the proper place to discuss a loss of clock speed in that scenario.
Don't care about the core count. It's the I/O that matters. If you need expansion cards, you simply cannot use the toy computer platforms, because there's nowhere to plug them in. That's why it's still frustrating that AMD didn't release 12-core and 16-core parts on the HEDT platform, with comparably lower prices.
@@SethTaylor1 My own personal mix of expansion cards:
Graphics card - 16 lanes
RAID controller - 8 lanes
10G NIC - 8 lanes
Sound card - 1 lane
It's simply not possible for me to build a computer using a toy platform. What about onboard sound, you might ask? It sucks. What about boards with 10G onboard? It's RJ45, not SFP+. I'm not even talking about any special industry parts. Just normal things that any advanced PC user might have.
@@TrueThanny also, I think 40GbE fiber takes 16 lanes, and even just a *single* GPU + 4x M.2 carrier card + 10-gig NIC is already more than the consumer platform supports.
I'd love to know how these chips compare under different use cases: compiling, running VMs (EVC vs no EVC), databases, big-data map-reduce stuff, AI, etc. Synthetic tests, Blender, and video work only tell a small section of the story.
Unfortunately, that's very unlikely to happen. There are 12-core and 16-core TR Pro parts, but the former is just $100 cheaper than the 7960X, and the latter is $400 more expensive. Given the increased cost that a WRX90 motherboard will likely have, the 7960X is still safely the price entry point to getting a modern processor capable of using expansion cards. So with the $600 ASRock TRX50 board, $1500 for the 24-core Zen 4 TR chip, and $1200 for the 128GB DDR5-6400 kit, you're looking at $3300 for an upgrade, if you have all the other bits already (case, PSU, cooler, etc.).
Not really. The x399 boards were great but after that they really went downhill. There's a reason that trx40 boards are barely even selling at $100 on eBay, and X99 boards are still selling for more than that lmao.
Could you start including compile benchmarks in your productivity tests? These are relevant not only for devs but also for many engineers, so seeing how a CPU like this performs would be nice.
I wonder how big of a change this CPU could bring to a big VFX studio. I always wonder how and when they justify updating their hardware. That would be an interesting video if you or your team members ever have a chance to touch on it.
I'm a VFX artist myself, and I can certainly see the appeal, but it's not like a studio will equip all their artists with these chips; a 7950X is just infinitely more economical. HOWEVER, if you're a Houdini artist who handles a lot of very large simulations, I guarantee that your studio will do everything they can to set you up with one of these chips, because the additional RAM capacity is a huge thing when you need to run incredibly large fluid sims. I'm sitting at a comfortable 128GB of RAM right now, and it's certainly got its limits. Also, if they have a dedicated CPU render farm, these will probably be eaten up for that, although more and more are using GPU render farms, and for most shots a 6000 Ada will perform great, provided they're using a GPU render engine like Octane.
I think these have their place for one big reason, they have as many cores as Epyc, but they're clocked much higher, so you get the best of both worlds.
I'm so happy Threadripper is back. I was so angry when I invested in the line and they pretty much pulled out less than a year later. It would have been hard to move onto another processor after this.
If we’re going with “LEDT”…. All I need is for LEDT to throw a few more PCIe lanes our way to fill that gap. Not being able to move my m.2 card into AM5 has me stuck on TRX40…
I'm a gamer, but videos like this remind me just how small a part of computing gaming actually is. This is just mind blowing computing power...and this is only at the HEDT level
I think some local-LLM benchmarks would be nice to see in general, and this would be an arena where a product like this could have some interesting utility (even if it's still not practical for those applications).
You'd be seeing mostly prosumers running some AI tasks, rendering, computationally expensive algorithms, or virtualization, or maybe also needing the PCIe lanes for multi-GPU (rendering/AI) or storage, and probably a ton more pro workloads. The GN video seems to have some interesting benchmarks surrounding that.
Who need non supported solutions ? Most people need intel and Nvidia computing, Autodesk etc, if one of them is AMD, you will get non supported results too. Who needs AMD, if you develop everything on AMD, they can do the same job, or better ? But who is willing to do that ? Small experimenting parties ? On what benefits ?
@lucasrem maybe try to explain that more clearly. Your comment is a bit of a brain vomit. Are you saying the AMD chips will perform worse than the Intel chips in Autodesk applications? And are you saying AMD CPUs are not supported by all software in general?
@jockey12022011 Yeah, it seems to me like he's suggesting that AMD CPUs and GPUs are not supported by some software, so you should just stick to Intel and Nvidia. Kind of a fanboy take when you don't support that stance with any proof, especially after seeing how Autodesk itself claims to be "hardware agnostic"; it is written right on the main page lol.

By the way, Intel produces x86 CPUs just like AMD does; they don't have any kind of proprietary technology that would make them the "only supported" way to do something, in contrast to AMD. The only relevant "CPU proprietary stuff" I can think of is virtualization support, where both AMD and Intel offer equivalent tech. GPUs are a different matter, and there I can see how some of the proprietary technologies that Nvidia offers could lead someone to prefer them over AMD or Intel offerings, purely based on the brand and the associated features.

But then again, for most of this stuff the subject is more about compute and less about specific technologies: audio and video editing, professional movie or music making, 3D modelling, simulation environments, you name it. They all pretty much don't care about the hardware you're using as long as it offers similar features, which AMD does. So why would you argue that you just can't do anything on AMD because "they are not supported" when the reality is that you clearly can, and sometimes it is even the better option?
Ngl I kinda wish that normal TR 7000 supported UDIMMs, and RDIMMs were for the Pro lineup. I know this is 100% a dumb wish, but I use TR 3000 for my main desktop and I liked that it was this insanely powerful system that could still look like a high-end gaming desktop. With RDIMMs, the 7000 series (and its new motherboards) is very clearly for workstations, and nothing else.
I'm an IT admin for infrastructure. This channel was one of the big reasons I wanted to do it for a living, and this line of chips, Threadripper starting from Ryzen gen 1, is possibly the single most important piece of hardware for that journey.

I LOVE the idea of this super overkill option for consumers. If I had the money, I would run this in any and all of the computers my family and I use privately, not because they're better for the applications or something, but simply because they're so cool. I don't quite know what it is, but these chips are the only ones that excite me this much on release. No other piece of hardware ever has.

It's not that I'm an AMD fanboy; I recommend Intel chips to a lot of people when asked, because for most people they're just better. It's that AMD are the only ones doing HEDT, and since Ryzen and this naming scheme they've just been dominating.
Some feedback: when highlighting the two CPUs in the thermals charts, the yellow highlight changes the colour so it no longer matches the chart colour. 7:46
The rub is going to be that YouTube benchmarks will not include Threadrippers in the normal benchmark suites, and the consumer version will go away again because no one associates these CPUs with the everyday user. But 4090s are somehow OK 😖
@@smittyvanjagermanjenson182 average consumers can't afford to pay the per-core licenses for those machines. Average users are not the ones who change the world.
Yup, kinda dumb. But there are indeed tasks in which the Threadripper delivers about 4 times the results for about the same power consumption, which for professional use can be quite impactful. (Obviously it can draw a lot more power too, but I am referring to "some" uses.)
Difference is you can support much more RAM and it’s got an entirely different use case. Waayyyyy more PCIE lanes too for expansion cards and faster storage. There’s a reason this exists. And it’s not to game or video edit on.
@@abdou.the.heretic it's prosumer. Not Epyc levels of versatility when it comes to running them in servers, but it has way more PCIe lanes than anything from Intel's HEDT and it also supports a lot more RAM.
Genuine question out of ignorance. Why does everyone avoid comparing HEDT to Xeon w if the price and performance are comparable. Would they not both be options someone looking for a $5000 threadripper may consider?
I would suspect that Xeon motherboards for workstation are kinda rare and not obtainable in retail. In that case they need to be compared with Threadripper Pro.
Agree, major oversight. The Xeon W5-2465X and its siblings are the real competitors to this lineup, and it would be nice to see how they perform head to head with the Threadripper 7000 series.
These graphics are so useful in providing insight. It really drives home how the $5000 7980X completely destroys the $600 Intel 14900K. Almost 3x the performance, at only 9x the price! AMD clearly the better -sponsor- product.
Actually, the M1 Ultra comparison is kind of impressive for Apple. That is a 3-year-old chip that uses 1/5 the power, inside a computer that costs $1000 less than the bare CPU being tested, and it still delivered ¼ of the performance.
@@benstacey831 The M1 Ultra score they compared to is the one from Cinebench 2024. Cinebench doesn't come with scores for the M2 Ultra. Looking at scores online, the M2 Ultra scores around 1918, so it's quite a bit below the LEDT 7950X and 14900K.
The games are not coded for this number of cores, but! What if you install a Windows VM and run the games in the VM? That way you would be able to define which cores the VM can use... It's at least an idea to try...
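Core pinning of the kind described is a real OS feature; a hypervisor does it for its vCPU threads, but the same idea can be sketched at the process level. A minimal sketch using `os.sched_setaffinity` (Linux-only; which cores to pick is an arbitrary example here):

```python
import os

# Pin the current process to a chosen subset of cores, then read the
# affinity mask back. A VM manager (libvirt, Hyper-V, etc.) applies the
# same restriction to its vCPU threads when you configure CPU pinning.

available = os.sched_getaffinity(0)      # cores this process may run on now
subset = set(sorted(available)[:2])      # e.g. restrict to the first two

os.sched_setaffinity(0, subset)          # apply the restricted mask
print(os.sched_getaffinity(0) == subset) # True: the kernel honours the mask

os.sched_setaffinity(0, available)       # restore the original mask
```

A game running inside a VM pinned this way would only ever be scheduled on the chosen cores, regardless of how many the host has.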
Threadripper 7000 is an absolute MONSTER(and that's even with the 7980X being heavily memory bandwidth bottlenecked). Makes low end desktop processors look like toys.
@TeamLinus - Providing a little extra context would be awesome in your high-end hardware videos. I'm still all the way back here on a 5800h / 3070 laptop, get 13k or so in R23. I'd love to see a side by side live demo of these new balls-to-the-wall consumer options versus what the average person actually has.
Definitely planning a Threadripper 7000 system for the future, just need to find a motherboard and a nice case that can support multiple 1200W PSU so I can throw in 3 or 4 5090 GPUs since it will all be for high detail 3D modeling and rendering.
My problem with these high-end multi-core CPUs is that most software and applications do not utilize the entire silicon real estate, especially at the consumer level. This is overkill for us average Joes, even if you need an amazing PC.
@@stuffstoconsider3516 It's completely fine with me, I am ok with forcibly committing cores to specific applications so they don't overlap, more cores means more apps without overlap.
@@stuffstoconsider3516 Most of the technology and innovations in these absurdly high-end monstrously powerful CPUs will eventually trickle down into the consumer end hardware. Which is genuinely a nice thing with EPYC giving us a bit to look forward to in terms of hardware innovations.
I just finished calibrating my new 1440p display to this video, and oh my god, I never knew I would have a Canadian jumpscare. Windows' regular HDR mode turns Linus into a ghost, but calibration luckily fixed that. Thank you for your services, everyone at LTT.
Linus, please do 3 levels of budget builds with these new Threadrippers when they are in normal rotation next year. I need suggestions and inspiration! I would love to see builds for people looking to render CG shots. I'm not really gonna game on this, and I don't mind seeing those performance numbers, but I mainly care about various types of render times for various types of artists.
@@MrA6060 This type of PC is for work for me. So I get to go a little all out for it since it's how I make my income for the next few years. If the render times are worth it, these prices can be justified :p
I'm looking forward to these coming down in price to reasonable levels in 5 years or so. I'm really looking forward to getting my hands on 64 overclockable cores. This might actually break the 1kW mark as a daily driver.
Valve: 1, 2
USB: 1, 1.1, 2, 3, 3.1 Gen 2x4 10Gbps
AMD Ryzen: 1, 2, 3, 5, 7 (yes, I know 4 and 6 exist, but only on laptops)
AMD Threadripper: 1, 2, 3, 7
MAKE IT MAKE SENSE
This seems like a really good CPU for CFD simulations; the high core count and high frequency have me just dying to get my hands on one some day. I recently bought an old Xeon server for simulations and it's awesome, but it doesn't hold a candle to this. I look forward to possibly getting one in 10 years 🤣
For some reason this makes me think about how many of us have multiple computers but our systems don’t yet have the ability to share big workloads with those machines at a low level. It’s up to application developers to implement that if they want it. So much time and energy might be saved -and useful hardware life extended- if Apple, Microsoft, and Linux devs work on making this a standard component in their OS frameworks.
If your workloads are easily distributable across multiple systems (like mine are) then 4 full 7950X systems is more than $1k cheaper than a single 7980X system for the same number of cores.
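The four-systems-versus-one trade-off above can be put in numbers. The prices below are placeholders for illustration only, not real quotes; the comment's actual claim depends on street prices at the time:

```python
# Same-core-count cost comparison, with assumed placeholder prices.
CORES_7950X, PRICE_7950X_SYSTEM = 16, 1200   # assumed cost of one full 7950X box
CORES_7980X, PRICE_7980X_SYSTEM = 64, 6500   # assumed cost of CPU + board + RAM

n_small = CORES_7980X // CORES_7950X         # systems needed to match 64 cores
cost_small = n_small * PRICE_7950X_SYSTEM

print(n_small)                               # 4
print(cost_small)                            # 4800
print(PRICE_7980X_SYSTEM - cost_small)       # 1700: over $1k cheaper here
```

Of course this only works if the workload really does shard cleanly across machines, as the commenter says theirs does; a single shared-memory job can't be split this way.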
I am happy about the new Threadrippers. I don't really need the cores, but I really need those PCIe lanes. I was hoping the Xeon W5 would have been more of a deal; I had my eyes on the W5-2465X. But after those turned out to be kind of a paper launch, I guess I will pick up a used 7960X bundle in like 18 months.
Threadripper is great for gaming, but it can ALSO do many other things at the same time. A CPU is for anything you want it to be. Threadripper can do it all if you need the power.
For video encoding, I'm surprised you didn't mention the obvious: you can run multiple encodes at once. Some tools even let you use scene detection to split the file up and encode each scene in parallel.
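The split-and-encode-in-parallel idea can be sketched without committing to a particular encoder. The `encode_scene` worker below is a hypothetical stand-in; a real pipeline would invoke x264, SVT-AV1, or similar on each scene's frame range, and the scene boundaries are made-up example values:

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical per-scene worker: in a real pipeline this would shell out
# to an encoder for one scene's frame range and return the chunk path.
def encode_scene(scene):
    start, end = scene
    return f"encoded frames {start}-{end}"

# Scene boundaries as a scene-detection pass might produce them (made up).
scenes = [(0, 240), (241, 600), (601, 900), (901, 1200)]

if __name__ == "__main__":
    # Scenes encode independently, so each one can occupy its own core(s);
    # the final step would concatenate the encoded chunks in order.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(encode_scene, scenes))
    print(len(results))  # 4
```

This is exactly why a many-core chip helps even with encoders that don't scale past a handful of threads on their own: the parallelism comes from running many independent encodes, not from one encode using every core.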
See, what I find great about thread ripper is their usefulness second hand or when they are a gen or two old and prices come down. For building a video and photo editing machine, support for quad channel ecc memory and all those pcie lanes are more important to me than clock speed. It really does fit a perfect little niche in the market.
I worked in scientific computing / molecular modeling for several years. Would've loved to have a workstation with this one in it because the majority of molecular modeling software is not GPU ready yet, even though a lot of the underlying matrix math would be great for that. I was often waiting for days in the queue of our Epyc and Xeon based high performance computing cluster (yes, mixed architecture, recompiling stuff all the time was A PAIN) to start calculation jobs which would then finish within half a day or so on 32-64 cores. Could've done many of these jobs locally on a machine with a Threadripper workstation. But alas, no funding for something like that in academia, and now I've moved on to greener pastures (and away from CPU melting molecular modeling simulations).
I love watching videos on products I can't afford
sameeee
RIGHT, ME TOO LOL
My favourite pastime
the same thing my nephew tells me whenever he finds me watching pc builds
Its the best
For those wondering, the specs of the highest of the PRO lineup is:
2TB of ECC RAM
96 Cores / 192 Threads
128 PCIe 5.0 lanes
$9,999
Note: No, I'm not going to buy one, I can barely afford rent.
bruh
Those are intended for server use, right?
So the specs of my body after that purchase will be:
1 kidney
1 lung
1 arm
1 leg
1 heart
1 brain
My PowerEdge R620 tops out at 768GB of RAM, so 2TB is like X_x. And the cores, I wouldn't say no to seeing Task Manager... adjust to the core count graph 😂
@moondust6034 no, high end workstations. Epyc is the server range.
I honestly expected my 7950X to last longer before being referred to as low end....
Hah! Peasant!
*He says whilst sitting computerless*
@@ailestriker9888 Apparently we're almost in the same boat then, but I guess a low end pc is slightly better than no pc technically...
Me slowly petting my Ryzen 5 3600 :
"There there buddy.. You're high end to me"
@@i_like_Peanuts
Gently pats my i7 4790K
"Please dont off yourself because I watched this video"
@@leadwolf32 pats my i9 10900KF
I love watching huge amounts of cores crush tough projects 😁
Meanwhile I love what Apple's smaller core-count baseline M chips can do in web dev...
In web dev we actually mostly want super fast single-core performance due to everything being largely single-threaded.
But in everything else? Yes. Threadrippers kick ass.
@@shapelessed I’m actually a huge M-series Mac fan myself!
I mean, if you’re like me and you got the full M3 Max the minute it was available, then you have almost desktop i9 performance in your lap right now. The thing is insane. Threadripper level? Not quite, but 16 cores up to 4.1 GHz with VERY wide decode per clock cycle? It punches well above its weight to say the least.
@@shapelessed What's the deal with web? Applications running on web tech should typically be built so that the average consumer PC can run them with no issues. And then, if you need anything heavier, multithreading is possible on the web, though not as efficient due to costly cross-thread communication in JavaScript.
A giant, expensive processor that sucks for gaming...yaaaa.
@@Epicgamer_Maclol NO its not near i9 😂
It's insane that AMD is actually bigger (money wise at the current moment) than Intel is. Lisa Su has to be one of the most successful CEO's in modern history.
helps when your executive is actually an engineer and not just a business person
Fr! The entire handheld gaming pc market, that is booming, only exists because of Ryzen APU's!
"money wise" being? I love AMD, don't get me wrong, but I cannot see how AMD is in any financial metric stronger than Intel right now.
Found the team red fanboy
Intel has a lot of business outside of CPUs, and still owns massive market share there anyway. AMD is financially much smaller than both competitors, Intel and Nvidia, which makes wins against them all the more impressive.
The competition is actually so disappointing right now. I'm glad AMD is still actually making improvements, although it would be nice to see the prices come down even just a little.
To be fair they themselves killed threadripper
The prices are coming down
Until Intel gets something remotely competitive, AMD can bank the profits. Though the reality is that they will probably make more Pro and Epyc chips than non-Pro, which bumps the non-Pro price up.
@@bionicgeekgrrl it's how the market works, what do you expect? Even Nvidia is doing it because they know there is no competition.
@@bionicgeekgrrl the only reason Intel is competitive at all is because they are trusted by people who don't care to use their brain, tbh
Can't wait for another Linus' personal rig upgrade, back to HEDT
He's probably going to say he's not going to use it, then one year later we learn on WAN Show that he's been using it for 6 months just because "no one was using it". Yeah sure, Linus, your editing team "couldn't" use it, right.
Nice Profile pic
@@korosaki13 lmao my guy, you're getting mad at him for an imaginary scenario you made up in your mind
@@FinneasJedidiah Seriously, dude needs to touch grass.
@@korosaki13 they can't... Their boards aren't compatible....
These graphs are a lot easier to understand at a glance than the old ones. Good improvement!
Except for the thermal draw graphs, where the highlighted box over the 7980X and 7970X in the legend changes the color of their labels!
@@totallynotmyeggalt6216 the thermal graph not being super readable doesn't bother me too much, because the average person really doesn't care.
When I knew nothing about computers 7 years ago, your videos were enjoying. Then I got my degree in IT, and your videos were still enjoying. I'm currently in my second year of computer science, and guess what, I still find your videos wildly enjoying! Achieving such depth while still keeping the concepts easy to understand is crazy. Great work!
FYI it's 'enjoyable'.
I'm thoroughly impressed with AMD's threading prowess here, really ripping those threads! Love how they're still pushing the high-end even in this class of CPUs. Besides, as you said, time equals money, so those extra threads could be a lifesaver. And being an AMD fan, these prices still feel like a justifiable investment. Can't wait to see how the landscape changes when Intel brings their mojo back.
As an AI developer, where training can sometimes take days or even weeks, something powerful like this would be amazing. Not all AI is trainable on gpus.
Which type of AI can't be trained on GPUs?
That's the first thing that came to my mind... Even if you're training on GPUs, the CPU still needs to do a significant amount of work. Especially if you have, say... a server full of GPUs all running different scenarios at the same time.
@@pietheijn-vo1gt There are definitely some types of models (like RNNs) that don't parallelize well because they have sequential dependencies. There's probably also some other corner cases for higher precision or other constraints, but regardless having a beefy system can be a benefit even if GPU training because the bar for data processing and task management can be pretty high, especially as models grow bigger (not to mention that CPU offloading is becoming more and more common and in those cases, more memory bandwidth/faster CPUs absolutely crushes).
@@pietheijn-vo1gt They didn't say they couldn't, and from a quick Google search it seems anything less dense than large language models can be trained on a CPU instead, without much issue.
Yes they did?
"Not all AI is trainable on gpus."@@DanielFerreira-ez8qd
A $5000 CPU only costs about 80hrs of developer time savings, so it's surprisingly quick to ROI.
Compared to some high-end consumer CPUs, a developer would "save" at most 1 h/month by upgrading to this, so at ~7 years the ROI is way too long.
@@Chipsaru Maybe if your developer is using the computer to watch YouTube, OK, but saving several minutes per short render adds up very quickly. Obviously nobody's going to pay $5k for a CPU in a computer that does spreadsheets all day long.
@@eugenes9751 developers don't do rendering; we are talking code compilation, AI training, DB queries, running VMs or containers, etc. This CPU will be faster in some scenarios, but not by that much. If my project compiles in 5 minutes now, it will compile in 4-4.5 minutes with this CPU, because the bottleneck is somewhere else.
If you are building large C++ code bases, a Threadripper will pay for itself very quickly.
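The payback arithmetic being argued over in this thread can be put in a small sketch. The hourly rate and hours-saved figures are illustrative assumptions, not real numbers:

```python
# Rough ROI sketch for a $5000 CPU upgrade, using assumed figures.

CPU_COST = 5000    # USD
DEV_RATE = 62.50   # USD/hour; at this rate, $5000 is exactly 80 hours

def payback_hours(cpu_cost: float, hourly_rate: float) -> float:
    """Hours of developer time the CPU has to save to break even."""
    return cpu_cost / hourly_rate

def payback_years(cpu_cost: float, hourly_rate: float,
                  hours_saved_per_month: float) -> float:
    """Years to break even at a given monthly time saving."""
    return payback_hours(cpu_cost, hourly_rate) / (hours_saved_per_month * 12)

print(payback_hours(CPU_COST, DEV_RATE))       # 80.0 hours
print(payback_years(CPU_COST, DEV_RATE, 1.0))  # ~6.7 years at 1 h/month
```

Both sides of the argument fall out of the same formula: at 1 h/month saved the payback is indeed close to 7 years, while a workload that saves an hour a day breaks even in a few months.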
In relation to AVX-512: yes, not a lot of programs explicitly use it, however the runtimes they are built on do, so things such as searching datasets, computing physics for games, etc., will run faster out of the box with AVX-512. An example of this is the AVX-512 support added to .NET 8; it is integrated into lots of the core libraries.
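The "runtimes use it for you" point generalizes beyond .NET: many runtimes and numeric libraries pick the widest SIMD path available at run time, so ordinary-looking code speeds up for free on capable hardware. A rough analogy in Python/NumPy (NumPy dispatches to SIMD kernels internally; whether AVX-512 specifically is used depends on the CPU and the NumPy build):

```python
import numpy as np

# The same logical operation, written two ways. The scalar loop touches
# one element at a time; the NumPy call hands the whole array to a
# vectorized kernel that can use SSE/AVX/AVX-512 under the hood.

data = np.arange(100_000, dtype=np.float64)

def scalar_sum(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

vector_total = float(np.sum(data))  # SIMD-backed reduction
assert vector_total == scalar_sum(data)
print(vector_total)                 # 4999950000.0
```

The caller never mentions AVX-512; the library chooses the fastest code path it has, which is exactly what the .NET 8 core libraries do.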
Can't wait for Linus to invest into the new Threadripper platform for the office. Only for it to be discontinued a year later.
If they're going to upgrade their workstations, this is really their only option right now, unless they go with the professional Threadripper.
@@the_undead If they do upgrade, that is. They already made the decision to go with LGA 1700.
I got my 2950X with an X399 motherboard for $1k back in the day. I might consider upping that to even $2k for an upgrade, as Threadrippers are pretty awesome for my needs (tons of virtualization), but even the cheapest option with a motherboard now goes over $3.5k and I just can't justify it... I guess the Threadripper line is a dead end for me. It was awesome while it lasted (before the previous generation), although I won't be changing mine anytime soon; it still does the job adequately.
I agree, I am using a 1920X for my server at home and need an upgrade. But even if there were a 12-core Threadripper, paying more than $1,000 for just a motherboard is just too much...
There is nothing new on the market with 12-16 cores, lots of lanes, and clock speeds higher than 3.3 GHz
@@maximkovac2000 The Pro series starts at 12 cores (7945WX), but the pricing is yikes
Spend the money
@@maximkovac2000 Spend the money, you'll be happy you did
@@chriswright8074 consoomer mindset
AMD's 64 cores consume just as much power as Intel's 8 (+16 "efficiency") cores. Damn.
I don't think you can compare just the number of cores though. For example, one efficiency core on an Intel 14900K is much stronger than one AMD core on Threadripper 7000, not even speaking of the performance cores. Just look at the raw clocks, 6 GHz vs 5.1 GHz, and of course Intel's TDP being much more efficient and "easier" to cool.
@@Zarod89 You seem to be confusing a lot of things.
"Intel being much more efficient"... you have no clue, buddy @@Zarod89
@@Zarod89 What? If you'd said an Intel performance core was faster than a Zen 4 core, then sure enough. But an efficiency core? Intel THEMSELVES have said those are about the performance of a Skylake core, and you know what AMD's equivalent to Skylake was? Zen 2.
"eye-watering $5000"
Tim Apple: ...and it comes in black.
Can't wait to see the userbenchmark review of this.
Let's be honest, this isn't really a desktop cpu, more of a low end enterprise/business chip.
*Low End Enterprise Threadripper*
LEET for short
No it isn't. First of all there's no such thing as an "enterprise/business chip", which is a buzzword with no definition. This just sounds like mental gymnastics because you can't afford one and only have a low end desktop processor with 16 PCIe lanes. People who aren't satisfied with toys will go Threadripper.
@chiefjudge8456 The mental gymnastics needed to convince me to spend 5 grand on a CPU for my personal desktop don't exist in any conceivable reality.
@@chiefjudge8456 lol mental gymnastics?
Missed the whole LEET joke part aye?
THERE IS COMPETITION FROM INTEL - the Xeon W-2400 lineup! So many reviews and no one even tried to compare against Intel's actual fresh HEDT, the W790 platform. What kind of review incompetence is this? Yes, Intel would lose, but so what? Does that mean we can just ignore it and compare to desktop CPUs? And LTT made a video about W790 Xeon W, yet when a fresh TR comes out you just ignore them? Mind-blowing.
Make a thread about it on their forum, you're much more likely to get traction there
+
+
Right after compensator 4, now that has to be redone.
The lower temps on the 7980X are actually due to the much lower power per core. The cores are operating far more efficiently with less leakage. I'd say 90% of the lower temps are due to this and surface area is only 10%.
I love my 3970x! Nice to see AMD back with more god chips.
Stop flexing on us poor peasants
@@caxlyofficial stop flexing about being a poor peasant.
I love my 9970x too from AMD and my RTX 8090ti which has over 46gb of vram.
@@skyecloud968 Only 46 GB of VRAM? Posh! I expected at least ten 48 GB Quadros from you.
@@caxlyofficial Lmao, not flexing. I use it for my business and its been a good product to me.
When it comes to AVX-512, you also need to mention whether the chip clocks down like on the Intel side of things. Intel hobbles their chips when using AVX-512 because of how much power it takes, so they slow the whole chip down to a lower clock speed in order not to blow their entire power budget (and also not create a melted pile of CPU under your already toasty cooler).
Not an issue with Zen 4. There are no AVX-512 offsets, and the dynamic clocks don't drop any lower under full load with AVX-512 than they do with any other kind of load.
@@TrueThanny yes but it should be in the video as a highlight of the AMD chip.
@@Dygear But the chips don't clock down?
@@Dygear But it's not a highlight. It's just a lack of bizarre behavior found on older Intel chips running AVX-512. A video about those chips is the proper place to discuss a loss of clock speed in that scenario.
@@TrueThanny Considering that was expected behavior for the Intel chips, the AMD chips not having that behavior is a highlight.
Don't care about the core count. It's the I/O that matters. If you need expansion cards, you simply cannot use the toy computer platforms, because there's nowhere to plug them in.
That's why it's still frustrating that AMD didn't release 12-core and 16-core parts on the HEDT platform, with comparably lower prices.
what industry / io are you specifically referring to?
@@SethTaylor1 My own personal mix of expansion cards:
Graphics card - 16 lanes
RAID controller - 8 lanes
10G NIC - 8 lanes
Sound card - 1 lane
It's simply not possible for me to build a computer using a toy platform. What about onboard sound, you might ask? It sucks. What about boards with 10G onboard? It's RJ45, not SFP+.
I'm not even talking about any special industry parts. Just normal things that any advanced PC user might have.
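Adding up that card list makes the point concrete. As a quick sketch (the lane counts come from the list above; the consumer-platform total is a ballpark figure for illustration, not the spec of any particular board):

```python
# Lane budget for the expansion cards listed above vs. what a typical
# consumer platform exposes from the CPU (~24 usable lanes, ballpark).
cards = {
    "graphics card": 16,
    "RAID controller": 8,
    "10G NIC": 8,
    "sound card": 1,
}

needed = sum(cards.values())
consumer_cpu_lanes = 24  # illustrative figure, varies by platform/board

print(needed, needed > consumer_cpu_lanes)  # 33 True
```

33 lanes of cards simply don't fit in a ~24-lane budget without dropping something to chipset lanes or cutting the GPU to x8, which is the whole argument for HEDT here.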
@@TrueThanny Also, I think 40GbE fiber takes x16, and if you wanted to even just use a *single* GPU + a 4x M.2 carrier card + a 10 gig NIC, that's also more than the consumer platform supports.
I'd love to know how these chips compare under different use cases: compiling, running VMs (EVC vs no EVC), databases, big-data map-reduce stuff, AI, etc... Synthetic tests, Blender, and video work only tell a small part of the story.
Phoronix has a more versatile test coverage up, just FYI.
Hope to see a new 16 core threadripper that might be a bit more economical. Could really use the PCIe lanes in my server.
Unfortunately, that's very unlikely to happen.
There are 12-core and 16-core TR Pro parts, but the former is just $100 cheaper than the 7960X, and the latter is $400 more expensive.
Given the increased cost that a WRX90 motherboard will likely have, the 7960X is still safely the price entry point to getting a modern processor capable of using expansion cards.
So with the $600 ASRock TRX50 board, $1500 for the 24-core Zen 4 TR chip, and $1200 for the 128GB DDR5-6400 kit, you're looking at $3300 for an upgrade, if you have all the other bits already (case, PSU, cooler, etc.).
7000 AMD Threadripper comes out :
Vegeta : Are you ready now? To witness a power not seen for thousands of years!?
Not that it's the most important thing, but the mobo design for threadripper has always been 🔥🔥🔥
I agree, but they were also all overpriced. And with the last gen they dead-ended the chipset and socket after zero upgrades. Fool me once...
Not really. The x399 boards were great but after that they really went downhill. There's a reason that trx40 boards are barely even selling at $100 on eBay, and X99 boards are still selling for more than that lmao.
Can you test this by playing on a 300K plus population Cities Skylines 2 save?
Need to know if this is a decent upgrade to play CS2.
Thanks.
1000 IQ move: buying an old Intel i9-7980XE and making everybody think you are rich 😂
I would love to see some C++ compilation speeds, must be a beast.
Could you start including compile benchmarks in your productivity tests? These are relevant not only for devs but also for many engineers, so seeing how a CPU like this performs would be nice.
I wonder how big of a change this CPU could bring to a big VFX studio. I always wonder how and when they justify updating their hardware.
That would be an interesting video if you ever have a chance yourself or team members to touch on
I'm a vfx artist myself, and I can certainly see the appeal, but it's not like a studio will equip all their artists with these chips, a 7950x is just infinitely more economical.
HOWEVER
If you're a Houdini artist that handles a lot of very large simulations, I guarantee that your studio will do everything they can to set you up with one of these chips, because the additional RAM capacity is a huge thing when you need to run incredibly large fluid sims. I'm sitting at a comfortable 128 GB of RAM rn, and it's certainly got its limits.
Also if they have a dedicated CPU render farm, these will probably be eaten up for that, although more and more are using GPU render farms, and for most shots, a 6000 ADA will perform great, provided they're using a GPU render engine like octane.
It's wild that AMD said it will pull 830 watts if you can cool it.
The 7995WX is nuts. There are already some videos out about it, and some overclocker managed to pull 1600 W on it.
I think these have their place for one big reason, they have as many cores as Epyc, but they're clocked much higher, so you get the best of both worlds.
I'm so happy Threadripper is back. I was so angry when I invested in the line and they pretty much pulled out less than a year later. It would have been hard to move onto another processor after this.
What do you use yours for?
@@FleaOnMyWiener Houdini Renders
If we're going with "LEDT"... all I need is for LEDT to throw a few more PCIe lanes our way to fill that gap. Not being able to move my M.2 card to AM5 has me stuck on TRX40...
I'm a gamer, but videos like this remind me just how small a part of computing gaming actually is. This is just mind blowing computing power...and this is only at the HEDT level
Rip and tear until it's done I guess.
Also, HOLY SHIT the 7980X alone costs as much as my car, even before factoring motherboard prices.
Just wait for like 5 years and get it for nothing lol
I think some local-LLM benchmarks would be nice to see in general, and this would be an arena where a product like this could have some interesting utility (even if it's still not practical for those applications).
These should be highly impressive for home lab users. Imagine all the high performance VMs you can make with these cores.
I love the Threadripper series of CPUs, but I have no idea why anyone would try to legitimize purchasing one if their primary use is gaming.
They are terrible for gaming. They aren't made for it and games aren't optimized for them. I learned the hard way. 😢😢😢
Can you do some interviews with customers of these higher-end products to see what they're actually used for?
You'd be seeing mostly prosumers running AI tasks, rendering, computationally expensive algorithms, or virtualization, or maybe also needing the PCIe lanes for multi-GPU (rendering/AI) or storage, and probably a ton more pro workloads. The GN video seems to have some interesting benchmarks around that.
Who needs non-supported solutions?
Most people need Intel and Nvidia computing, Autodesk etc. If one of them is AMD, you will get non-supported results too.
Who needs AMD? If you develop everything on AMD, they can do the same job, or better? But who is willing to do that?
Small experimenting parties? On what benefits?
I know that Threadripper was used in rendering Terminator movie effects.
@lucasrem maybe try to explain that more clearly. Your comment is a bit of a brain vomit. Are you saying the AMD chips will perform worse than the Intel chips in Autodesk applications?
And are you saying AMD CPUs are not supported by all software in general?
@jockey12022011 yeah, it seems to me like he's suggesting that AMD CPUs and GPUs are not supported by some software, so you should just stick to Intel and Nvidia.
Yeah kind of a fanboy take when you don't support that stance with any proof, especially after seeing how Autodesk itself claims to be "hardware agnostic", it is written right on the main page lol.
By the way, intel produces x86 CPUs just like AMD does, they don't have any kind of proprietary technology that would make them the "only supported" way to do something, in contrast to AMD.
The only relevant "CPU proprietary stuff" I can think of is virtualization support, where both AMD and Intel offer equivalent tech.
Talking about GPU stuff is a different matter, and there I can see how some of the proprietary technologies that Nvidia offers could lead someone to prefer them over AMD or Intel offerings just purely based on the brand and the associated features.
But then again, for most of this stuff, the subject is more about compute and less about specific technologies: audio and video editing, professional movie or music making, 3D modelling, simulation environments, you name it. They all pretty much don't care about the hardware you're using as long as it offers similar features, which AMD does. So why would you argue that you just can't do shit on AMD because "they are not supported" when the reality is that you clearly can, and sometimes it's even the better option?
Wake up Intel!
Ngl I kinda wish the normal TR 7000 supported UDIMMs, with RDIMMs reserved for the Pro lineup. I know this is 100% a dumb wish, but I use TR 3000 for my main desktop and I liked that it was this insanely powerful system that could still look like a high-end gaming desktop. With RDIMMs (and the new motherboards), the 7000 builds are very clearly workstations, and nothing else.
I'm an IT admin for infrastructure. This channel was one of the big reasons I wanted to do it for a living, and this line of chips, Threadripper starting with Ryzen gen 1, is possibly the single most important piece of hardware on that journey. I LOVE the idea of this super overkill option for consumers. If I had the money, I would run this in any and all of the computers my family and I use privately, not because they're better for the applications or something, but simply because they're so cool. I don't quite know what it is, but these chips are the only ones that excite me this much on release. No other piece of hardware has ever excited me as much.
It's not that I'm an AMD fanboy. I recommend Intel chips to a lot of people when asked, because for most people they're just better. It's that AMD are the only ones doing HEDT, and since Ryzen and this naming scheme arrived they've just been dominating.
Well, a 13th-gen i5 will outperform the Threadripper for gaming and most applications people use. So yeah, a waste of money lol
@@gregcrouse8833 I don't disagree with this, I understand that it's entirely a waste for a lot of people, it's still cool as f
ahh yes, technology that i cannot afford.
This video needs 3 more sponsor ads.
Some feedback: when highlighting the two CPUs in the thermals charts, the yellow highlight changes the colour so it no longer matches the chart colour. 7:46
Linus needs a cinebench sponsor. It's all he relies on.
The rub is going to be that YouTube benchmarks won't include Threadrippers in the normal benchmark charts, and the consumer version will go away again because no one associates these CPUs with the everyday user. But 4090s are somehow OK 😖
They will and they won't. Not everyone is satisfied with low end desktop chips.
RTX 4090 isn't what any average consumers need smh.. that's all just overhype from Gamers.
@@smittyvanjagermanjenson182 Average consumers can't afford to pay the per-core licensing on those machines.
Average users are not the ones who change the world.
Twice as fast as the 14900K, 10 times as expensive.
5k for anything consumer grade is theoretical levels of idiocracy
Yup, kinda dumb. But there are indeed tasks where the Threadripper delivers about 4 times the results for about the same power consumption, which for professional use can be quite impactful.
(Obviously it can have much higher power consumption, but I am referring to "some" uses.)
@@abdou.the.heretic same with 4090
Difference is you can support much more RAM and it’s got an entirely different use case. Waayyyyy more PCIE lanes too for expansion cards and faster storage.
There’s a reason this exists. And it’s not to game or video edit on.
@@abdou.the.heretic It's prosumer. Not EPYC levels of versatility when it comes to running them in servers, but it has way more PCIe lanes than anything from Intel's HEDT and it also supports a lot more RAM.
Next AMD ultimate upgrade video is just a Threadripper chip sitting on a shelf.
Best introduction ever Linus. You took it to the next level there, "Makes Intel's fastest consumer chip look like a drugged donkey!" WOW.
Genuine question out of ignorance: why does everyone avoid comparing HEDT to Xeon W if the price and performance are comparable? Wouldn't they both be options for someone considering a $5,000 Threadripper?
I would suspect that workstation Xeon motherboards are kind of rare and not obtainable at retail. In that case they'd need to be compared with Threadripper Pro.
Agree, major oversight. The Xeon W5-2465X and its siblings are the real competitors to this lineup, and it would be nice to see how they perform head to head with the Threadripper 7000 series.
AMD is back but chip looks HUGE!
Day is always better when an LTT video comes out!
These graphics are so useful in providing insight. It really drives home how the $5000 7980X completely destroys the $600 Intel 14900K. Almost 3x the performance, at only 9x the price! AMD clearly the better -sponsor- product.
Actually the M1 Ultra comparison is kinda impressive for Apple. That is a 3-year-old chip that uses 1/5 the power, inside a computer that costs $1,000 less than the CPU alone being tested, and it was only ¼ the performance.
kinda odd to not use the m2 ultra as comparison
March 2022 was 3 years ago? Though I do get your point that the M1 Ultra is based on the same arch as the regular M1 from November 2020
@@benstacey831 The M1 Ultra score they compared to is the one on Cinebench 2024. Cinebench doesn't come with scores for M2 Ultra
Looking at scores online the M2 Ultra scores around 1918 so it's quite a bit below the LEDT 7950X and 14900K
@@benstacey831 cinebench 2024 might not have reliable data for it
Impressive for you, but Apple Silicon can't do what this CPU can do.
The games are not coded for this number of cores, but! What if you installed a Windows VM and ran the games in the VM? That way you would be able to define which cores the VM can use... It's at least an idea to try...
To explain why this wouldn't help would take too much effort, but it wouldn't.
Threadripper 7000 is an absolute MONSTER(and that's even with the 7980X being heavily memory bandwidth bottlenecked). Makes low end desktop processors look like toys.
@TeamLinus - Providing a little extra context would be awesome in your high-end hardware videos. I'm still all the way back here on a 5800h / 3070 laptop, get 13k or so in R23. I'd love to see a side by side live demo of these new balls-to-the-wall consumer options versus what the average person actually has.
Nice vid Linus! It’s funny to see you test things that are twice my computer’s price.😂😂
Definitely planning a Threadripper 7000 system for the future, just need to find a motherboard and a nice case that can support multiple 1200 W PSUs so I can throw in 3 or 4 5090 GPUs, since it will all be for high-detail 3D modeling and rendering.
My problem with these high-end multi-core CPUs is that most software and applications do not utilize the entire silicon real estate, especially at the consumer level. This is overkill for the average Joe, even if you need an amazing PC.
Dang, that's a lot of computing power :D It's great that such a CPU hits the desktop market.
@@stuffstoconsider3516 That's because this product isn't FOR you, so I don't understand why you have a "problem" with it?
@@stuffstoconsider3516 It's completely fine with me, I am ok with forcibly committing cores to specific applications so they don't overlap, more cores means more apps without overlap.
@@stuffstoconsider3516 Most of the technology and innovations in these absurdly high-end monstrously powerful CPUs will eventually trickle down into the consumer end hardware. Which is genuinely a nice thing with EPYC giving us a bit to look forward to in terms of hardware innovations.
I just finished calibrating my new 1440p display with this video, and oh my god, I never knew I would ever have a Canadian jumpscare before. Windows' regular HDR mode turns Linus into a ghost, but calibration fixed that luckily. Thank you for your services, everyone at LTT
Now this needs an ITX board!
Linus, please do 3 levels of budget builds with these new Threadrippers when they're in normal rotation next year. I need suggestions and inspiration! I would love to see builds for people looking to render CG shots. I'm not really gonna game on this, and I don't mind seeing those performance numbers, but I mainly care about various types of render times for various types of artists.
ah yes the 3 levels of budget being 10k, 20k and 30k
@@MrA6060 This type of PC is for work for me, so I get to go a little all-out on it, since it's how I'll make my income for the next few years. If the render times are worth it, these prices can be justified :p
I'm looking forward to these coming down to reasonable prices in 5 years or so. I'm really looking forward to getting my hands on 64 overclockable cores. This might actually break the 1 kW mark as a daily driver
Valve: 1, 2
USB: 1, 1.1, 2, 3, 3.1 gen 2x4 10gbps
AMD Ryzen: 1, 2, 3, 5, 7 (yes I know 4 and 6 exist but only on laptops)
AMD Threadripper: 1, 2, 3, 7
MAKE IT MAKE SENSE
We need a dual (or quad) 7990X Pro Threadripper whatever, and some 4090 SLI, to do a true Compensator build
This is my daily reminder that I'm never going to be able to afford a PC.
linus never fails to spread my cheeks and fill me up with happiness
@@Dessert_And_Tea Oh, happiness. I thought you were talking about something else
FIRST I HOPE
2 sec off
@@APerson-14 damn
first to be cringe maybe
yep i accidentally had caps on lol @@keir92
MRDT: Mid-range Desktop
This seems like a really good CPU for CFD simulations; the high core count and high frequency have me just dying to get my hands on one some day. I recently bought an old Xeon server for simulations and it's awesome, but it doesn't hold a candle to this. I look forward to possibly getting one in 10 years 🤣
For some reason this makes me think about how many of us have multiple computers but our systems don’t yet have the ability to share big workloads with those machines at a low level. It’s up to application developers to implement that if they want it. So much time and energy might be saved -and useful hardware life extended- if Apple, Microsoft, and Linux devs work on making this a standard component in their OS frameworks.
This is a great example of your new release-when-ready approach - you didn't rush to do a day one, and the released video is full of info - great
The script writer had fun with those star wars references
It's 20% faster on Linux. It was a mistake to use Windows in the video.
THREEAAADDDRIIPPPEERRRRR!!! I'm still really hoping they actually keep up support on this one, after what they did to us before
I love how easy this segue was. Nice
If your workloads are easily distributable across multiple systems (like mine are) then 4 full 7950X systems is more than $1k cheaper than a single 7980X system for the same number of cores.
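A quick sanity check of that multi-node claim. All prices below are hypothetical launch-era ballparks I'm assuming for illustration (CPU, board, RAM, and per-node odds and ends), not quotes from the comment or the video:

```python
# Sketch: 4x 16-core 7950X nodes vs. 1x 64-core 7980X build.
# Every price here is an assumed ballpark; adjust to your market.
node = {"7950X": 700, "board": 250, "64GB RAM": 200, "PSU/case/etc": 250}
four_nodes = 4 * sum(node.values())            # 4 * 1400 = 5600

tr_build = {"7980X": 4999, "TRX50 board": 600, "128GB RDIMM": 1200}
one_threadripper = sum(tr_build.values())      # 6799

print(four_nodes, one_threadripper)
assert one_threadripper - four_nodes > 1000    # the ">$1k cheaper" claim
```

Same 64 cores either way; the trade-off is four OS images and a network hop versus one big shared-memory box with all the PCIe lanes in one place.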
I am happy they returned; as a dual-GPU, double-VM-running person, these HEDT chips are just the best thing.
I honestly wish I had a job that could actually make use of this insane thing. I just want a reason to need to buy one.
Reunited and it feels so good
Scotty Kilmer: "The threadripper is back and im mad as hell"
Gotta swap that '94 Celica ECU CPU with a Threadripper
@@MaxKatzin "I've seen these things with a million crypto hours on them and they still run like a clock"
@@MasterGoku21 But this traverse is done after just 50,000!
Sometimes I just be starting a sentence
I am happy about the new Threadrippers. I don't really need the cores, but I really need those PCIe lanes. I was hoping Xeon W5 would have been more of a deal; had my eyes on the W5-2465X. But after those turned out to be kind of a paper launch, I guess I will pick up a used 7960X bundle in like 18 months.
Although it's not for gaming, I would like to see what the CPU usage is like on a Cities: Skylines 2 map with 200k+ residents.
Threadripper is great for gaming, but it can ALSO do many other things at the same time. A CPU is for anything you want it to be. Threadripper can do it all if you need the power.
For video encoding I'm surprised you didn't mention the obvious: you can run multiple encodes at once. Some tools even use scene detection to split the file up and encode each scene in parallel.
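The chunk-split idea sketched in code. This is the simple fixed-length version (real tools split on scene cuts instead of fixed offsets); the ffmpeg flags are standard, the file names are placeholders:

```python
def chunk_commands(src, duration_s, chunks, codec="libx264"):
    """Build one ffmpeg command line per fixed-length chunk of the input.

    Each command seeks with -ss, encodes -t seconds, and writes its own
    output file, so the commands are fully independent of each other.
    """
    chunk_len = duration_s / chunks
    return [
        ["ffmpeg", "-ss", str(i * chunk_len), "-t", str(chunk_len),
         "-i", src, "-c:v", codec, f"chunk_{i:03d}.mp4"]
        for i in range(chunks)
    ]

cmds = chunk_commands("input.mp4", duration_s=600, chunks=8)
print(len(cmds), cmds[0][2], cmds[-1][2])  # 8 0.0 525.0
# Because the commands share nothing, they can run in parallel, e.g. via
# concurrent.futures and subprocess.run, with the chunks concatenated
# back together afterwards. That's how one many-core CPU gets saturated
# by an encoder that is otherwise poorly threaded.
```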
Man, I don't need the cores, but the 24 PCIe lanes we get on mainstream desktop are just not enough!
See, what I find great about Threadripper is its usefulness second-hand, or when the chips are a gen or two old and prices come down. For building a video and photo editing machine, support for quad-channel ECC memory and all those PCIe lanes matters more to me than clock speed. It really does fit a perfect little niche in the market.
Jumped on the video the minute it showed up! Ready for the segwaaay
Add an active heat-pump chiller to your water-cooling setup.
The amount of 420 jokes LTT have been making has me thinking Linus is finally joining team green 🍃
I worked in scientific computing / molecular modeling for several years. Would've loved to have a workstation with this one in it because the majority of molecular modeling software is not GPU ready yet, even though a lot of the underlying matrix math would be great for that. I was often waiting for days in the queue of our Epyc and Xeon based high performance computing cluster (yes, mixed architecture, recompiling stuff all the time was A PAIN) to start calculation jobs which would then finish within half a day or so on 32-64 cores. Could've done many of these jobs locally on a machine with a Threadripper workstation. But alas, no funding for something like that in academia, and now I've moved on to greener pastures (and away from CPU melting molecular modeling simulations).
What software were you using to simulate, and what OS? Linux?
Jay had nice improvements switching to a custom loop!
They sure know how to overclock the prices
Video quality has gotten amazing 👏