There are 2 minor mistakes in this video:
1. 7:51 the single core score number of 5965wx should be 1365 not 2231. The percentages are still correct though.
2. 12:49 Davinci Resolve benchmark there are a few '-%' that should be RED instead of green.
Thanks for pointing them out guys! 🙏😇
Also at 7:51 your multi-core value for the 13900K is 25469 and for the 5965WX is 25469, yet the percentage gain on the 13900K is listed as 1.90%
Simple: on cost, Intel wins; on performance, AMD barely makes a win
And at 2:53 where Zen3 and Zen4 are listed as Process Nodes but they're actually just architectural codenames. The process nodes are TSMC 7nm and TSMC 5nm.
yeah, I saw the green -% numbers, figured that was one of the mistakes
That is very confusing for someone trying to read the data.
During the DaVinci benchmark, the negative numbers should be in red but instead are in green. If this is the part you were referring to.
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
I love my 13900K, I work primarily in Cinema 4D and Photoshop (and some Davinci here and there), and it's the best money I've spent in recent years. One thing often overlooked is the power consumption during medium and light loads, because sure, when I'm running sick renders for hours it does eat a fair bit of power, but in my overall workflow I don't record more than 5-10% of high load. Most of my effort is spent on modelling, painting (I use Photoshop as a design tool more than a photo editing tool), and creating various assets, and there the 13900K sips power. The E-cores really are doing an amazing job; under normal load it uses the same power as my old CPU did during idle, which is more or less the same as current-day Ryzen chips. In my experience it's only when you push past 100-150 W that the scale starts turning around, but overall I save so much power during the 90% of my workload that isn't pushing the CPU to its high capacity.
I regard it more as a hyper efficient cpu for common workload, that then calls in the 24 core cavalry when important battles have to be fought :D
Windows 11 or 10?
@@danielvipin7163 Windows 11; Windows 10 does a little worse
Windows 10 21H2 also supports E-cores, but Windows 11 does it better
@@danielvipin7163 Win10, but afaik the differences are more or less nonexistent at this point. It was only the first month or so that there were big differences in the hybrid core utilization
How much DDR5 memory you running?
@@Filmmaker809 32gb. It's low for my liking but waiting for 48gig sticks to come down in price and jump to 96 (mostly 3D work benefits here for my tasks)
For my 13900K and 14900K I have done the following: MCE off, PL1 and PL2 limited to 225 W, P-core boost limited to 5.5 GHz and E-core boost to 4.3 GHz, and the Balanced power profile in Windows (although I do disable core parking to keep the system highly responsive). Oh, and just XMP on the RAM. I didn't change the LLC value. I set a modest voltage offset of -0.015 V and set the core current limit to 300 A. I disabled the C6 & C7 C-states and EIST. Lastly, I locked AVX at a 0 offset. I have tested with P95, CB R23 and CB R15. All great, and in a mid-20s °C room no workload exceeds 80°C on the package or cores. Very happy, and benchmarks are very close to where they were before taming these beasts.
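In case anyone wants to copy this, here is the same tuning summarized as a checklist. This is just a rough sketch: the exact BIOS menu names and value formats vary by motherboard vendor, so treat the keys below as descriptions, not literal settings.

```python
# Rough checklist of the tuning described above (names are descriptive,
# not literal BIOS menu entries -- those differ between board vendors).
tuning_13900k = {
    "MultiCore Enhancement": "Disabled",
    "PL1 / PL2 power limits": "225 W / 225 W",
    "P-core max boost": "5.5 GHz",
    "E-core max boost": "4.3 GHz",
    "RAM": "XMP profile only",
    "Load-Line Calibration": "board default",
    "Vcore offset": "-0.015 V",   # modest undervolt
    "Core current limit": "300 A",
    "C6/C7 C-states": "Disabled",
    "EIST": "Disabled",
    "AVX offset": "0",
    "Windows power plan": "Balanced, core parking disabled",
}

for setting, value in tuning_13900k.items():
    print(f"{setting}: {value}")
```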
If I can point out one thing that nobody has said so far: Threadripper, like Xeon, is built for stability, to stay operational 24/7. So if you're a creator who uses the Adobe package, go for an i9/R9, while if you're working in 3D packages and rendering (Maya, 3ds Max), simulating (Houdini), or doing game design/virtual production (Unreal Engine/Omniverse), go for TR/Xeon with ECC RAM, because you don't want your software crashing or your PC bluescreening (yes, I'm having these problems with my R9 3950X). Great videos tho!
Just blame every minute of lost sleep on Adobe.
You'll sleep better.
@@normal_human I don't think I understand your comment. I use Photoshop and Premiere Pro sometimes and I don't have stability issues with that software. Where did I lose my sleep exactly?
@@G4ggix Sure you do. Everyone does.
@@normal_human If you say so..
DAW 1st, editing 2nd, gaming 3rd... pleased I've just snapped up a Mini-ITX Z790; the build begins! Thanks for this
I couldn't handle my system issues anymore during heavy After Effects workloads, which is every single project. So I jumped on the 5965WX from Puget. Probably gonna pair it with two 4090s for a multi-monitor setup. But I badly need a stable system with better storage and PCIe capabilities.
Thanks for the review!
Nobody cares!
"I couldn't handle my system issues anymore"
@bernardsantos210 are you from USA? 😂
He literally just made this comment to try and flex, but nobody cares.
Very informative. Thank you for this. Must have taken a lot of time and effort to compile these results, much appreciated.
Thanks for the detailed comparison bro .❤
Too late, just finished building my i9-13900K system, but nice to know. Great video as always! One of your other videos helped refresh my mind on computer builds. Price per performance for most applications... the i9-13900K wins out if you don't mind hot and power-hungry. I wish you would have measured the i9 at its stock power limit of 253 W, since that is the better way to handle that chip. Plus actions can be handled by the GPU, even in V-Ray. So I would save my money on the CPU and upgrade the GPU.
Being on a newer platform and the fact that the 13900k wins in nearly every time-sensitive application - video playback, DAW work with typical mixes, etc - means you made a good choice. The 13900k doesn't really get that hot unless you're blasting it in cinebench as long as you have a good cooler. I have a 13700k overclocked to 5.5GHz and don't go above 110W with it while gaming, streaming, youtube video, discord and so forth running at the same time. So it sits at 50-55C under load. The TDP is not in any way representative of how much the 13th gen Intel chips actually use in typical workloads.
@@CyberneticArgumentCreator According to TECH YES CITY, the 13900K and the S series have the worst latency in a workstation.
He downgraded to 10th gen because of those issues.
@@CyberneticArgumentCreator Exactly! I've been rendering 3D stuff on Corona Renderer with 100% utilization at 270 W and 85°C
I turned off Multi-Core Enhancement in the BIOS (was gonna undervolt but didn't) and my max temp on the 13900K is 60°C
Love your style and delivery... great tech channel.
1) AM5 CPUs actually have 28 PCIe lanes, with 4 connecting to the chipset and 16+4+4 general purpose (or 8+8+4+4 with bifurcation). You are still only guaranteed 16+4 lanes, as the other x4 may be used for onboard high-speed peripheral connections, although it MAY be left vacant for a 2nd CPU-direct-connected NVMe. Depends on the motherboard/manufacturer.
2) Intel LGA1700 CPUs have the equivalent of 28 lanes, with 8 DMI lanes from CPU to chipset and 16+4 general purpose (or 8+8+4 with bifurcation). Intel chipsets also support far more high-speed lanes, with a mix of Gen 4/3 on Z690 and Gen 5/4 on Z790.
For workstation, choose AMD. For high-end consumer desktop and PCIe connectivity/performance, choose Intel. For 2nd tier (B650 AMD, B660 Intel), choose AMD. If you don't care about PCIe lanes, choose whichever you like.
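A tiny sketch tallying the budgets described above, purely as an illustration (the numbers are the ones from this comment, not an exhaustive platform spec):

```python
# Tally of CPU PCIe lane budgets as described above (illustration only).
platforms = {
    # name: (general-purpose lane groups, lanes/links to the chipset)
    "AM5":     ([16, 4, 4], 4),  # x16 can bifurcate to x8/x8; one x4 may be board-reserved
    "LGA1700": ([16, 4], 8),     # x16 can bifurcate to x8/x8; 8 DMI lanes to the chipset
}

for name, (gp, chipset) in platforms.items():
    print(f"{name}: {sum(gp)} general-purpose + {chipset} to chipset = {sum(gp) + chipset} total")
# AM5: 24 general-purpose + 4 to chipset = 28 total
# LGA1700: 20 general-purpose + 8 to chipset = 28 total
```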
Which AM5 motherboards have 28 PCIe lanes?
Thank you for your benchmark comparison. There were almost no benchmarks of threadripper + lightroom at all.
The TR Pros have varying amounts of L3 cache: 12-16 cores get 64MB, 24-32c 128MB, 64c 256MB. It's a shame there are no lower-core-count 256MB versions, but they probably wouldn't be any cheaper, considering what they charge for the frequency-optimized EPYC 7003 series. I'm gonna try the 7950X3D, 7900XTX, a 100GbE Chelsio NIC, and two PCIe 4.0 Optanes. The CPU, RAM, and motherboard are about ⅓ the price of the 5965WX platform, though it's also ⅓ the memory bandwidth, provided 4x32GB sticks of ECC run at 4800 MT/s. Kingston has some Hynix M-die ECC UDIMMs that are QVL'd for 4800, but they're on backorder, so I'll try the Crucial/Micron A-die for now, and we'll see what the recent AGESA update brings
I have the 7950X3D. I stream on Twitch at a 1080p placebo OBS setting while 4K ultra gaming on an RTX 4090 at the same time. It will run better with the next-gen Nvidia flagship cards.
@@lolohasan6424 good to know, but it's gotta be Radeon because of the application. If there were more PCIe lanes it could have both
For rendering: if you render on GPU and have a max of 2 GPUs, the 13900K or the 7950X with a good chipset are the better options, because in modelling the single-core performance is more important, and that is the part that consumes most of my day. I would go for Threadripper only if I needed more GPUs.
Very good conclusion!
That's exactly what I am doing with two Suprim hybrids and an i9-13900K. Best performance per $$$... and no OC needed with that setup! Blender Gooseberry with only one card: 15 seconds per frame.
Hey, thanks. Just did a 7800X3D and an i9-13900K this week. Love both of them.
Threadripper is a class above, and the cost of the workstation build is way higher (like the cost of a new car). Don't get Threadripper unless you really have the technical expertise to know why. I am very happy with my i9-13900K with twin NVLinked 3090s (that I was lucky to get at a fair price from eBay). It is a powerhouse, not cheap, but not the price of a new car either. I have not even gotten around to using all the performance-boosting features and have not had any reason to do so.
I'm using a high-binned 13900KS at 6.2 GHz; this CPU is a beast even for production and power consumption.
Have you noticed the stuttering issues with your 13900k, as reported by Tech Yes City?
ssshhh, this is an Intel plug video
On-premises servers, 3D modelling, fluid dynamics simulation, machine learning.
That's what these Threadrippers are for, not a Photoshop box 😤
At my roommate's job, his department shares a single server with a Threadripper 3000, 32 cores, 4x Radeon W6800 Pro. I don't understand well what his job really is, but it's something to do with steel structures in construction.
An Intel 13900K with an RTX 4090 doesn't even have a chance against that old machine.
You mention the memory bandwidth comparison but didn't show how effective it is in this comparison if you were to add the extra sticks on the Threadripper, so I'm not sure it is a fair comparison if you are limiting the Threadripper. Adding the extra sticks on the non-TRs will limit their performance because of their memory controller, but not utilizing the extra performance you can get on a TR just leaves me thinking that I'm not getting the full story.
Yeah it doesn't really make sense.
For reference my Epyc 7302 gained 800 points in Cinebench R23 (running in a VM with 14 cores allocated) just by populating all 8 ram slots.
What do you guys think about the HP Omen GT22-1957nz (Intel Core i9 13900K, 64 GB, 4000 GB SSD, Nvidia GeForce RTX 4090)?
For 6590.85 lari I bought an Intel Core i9 13th-generation computer.
RAM size: 64 GB
Solid-state drive capacity: 4 TB
Graphics processor: Intel HD Graphics 770
Motherboard model: ASUS ROG Matrix Z690-E Gaming WiFi 6E (LGA 1700)
An AMD Ryzen Threadripper Pro 5965WX (24 cores, 3.8 GHz) alone is worth 7971.84 lari.
@thetechnotice None of the benchmark tools used in this video will show the power of the 8-channel memory on Threadripper. You need to use computational fluid dynamics (CFD) benchmark tools and a really large CFD file, and only then will you see how much of a monster the Threadripper CPU really is.
Unrelated!
This video was about the best CPU for creators, not for scientists.
@@akyhne engineers are creators, we actually design and create things.
@@besssam It doesn't really matter that you make up your own meaning of the phrase "content creator". The video is not for you.
If you need tests more focused on engineering and scientific calculations, go to the Level1Techs YouTube channel. He does plenty of testing with Threadripper, Epyc, Xeon and the like, for your needs.
This channel is more geared towards content creators, just as Gamers Nexus is geared towards gaming.
and how will that boost Intel sales?
@Technotice
7:59 - the difference percentages on the processor scores
13:08 - in Resolve the negative percentage is green
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
Thanks for this comparison, I was wondering myself if I should get a 64 core lower clock speed or a high clock speed 16 / 24 core.
What do you think about the upcoming Threadripper 7000 series?
For workstation
One glaring mistake is that you did not include the direct competitor to Threadripper Pro, which is Xeon W. Let's see you do a Threadripper Pro vs Xeon W (and possibly vs Epyc) test. No faffing around - let's have 512 GB RAM, multiple NVMe drives, 10+ Gb Ethernet, and so on.
I think the mistake was some negative indicators were actually in green even though they should have been in red. This was in the segment at minute 13.
I’m surprised that you haven’t covered latency in your reviews. Just benchmarks.
Thank you!
7:59 - I guess one mistake here is the Geekbench scores?
The single core scores that you displayed for the 5965wx, 7950x and 13900K are all the same at 2231, yet the difference that you highlighted is different for each of them.
Then the Multi scores for the 5965WX vs the 13900K...
You scored the same at 25469 for both CPUs, but highlighted a 1.90% difference for the 13900K. I'm assuming you pasted the wrong score for the 13900K.
12:43 - Hehe score coloring in the Pugetbench for Davinci Resolve, namely the 4K and 8K media tests. 13900K and 7950X numbers should be red, not green.
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
You've got a good eye!
@@theTechNotice No problem, and a solid vid either way! There's barely any content covering these HEDT semi-server chips, so I'm glad I subbed to ya ❤👍
@@theTechNotice also at 1:38 "..it's got 2 cores per thread".. should be "2 threads per core"
(Not an expert)
(Not an Intel employee/fanboy/shill)
Interesting how there's no mention of Intel's CET in its 12th/13th gen chips. That feature alone gives Intel the lead, considerably in my opinion. To my knowledge AMD doesn't really have anything like that, unless someone can link to documentation stating otherwise.
All of the tech channels seem to overlook it, and I think it's something to consider. If you are not playing AAA games often and do not plan on streaming, then you really don't need a lot. I think even if you do, Intel is still the best way to go.
Since that's a security feature, these channels only cover the numbers: core count, speed, fps, etc etc etc.
I agree though and you make a valid point. I think Intel's CET in their newer processors (12/13 gens) are really worth considering when buying a processor today in 2023.
@11:00 In Premiere Pro, the last 2 values for the 7950X are wrong or wrongly coloured
The other 2 were already mentioned
All these 4 chips are friggin amazing. It's all about price, ego and the intended use.
Or two of the three… ego should never be a factor. Or a self reflection and opposing factor in enforcing one of the other two… logic for the win hopefully haha
@@damonm3
It's buoyancy of all three existing in harmony... Like the tri state of water.
@@pamus6242 eh, I think ego isn’t essential or beneficiary in any way. So disagree and I’m set on proving it!!! (Ironic joke 🍻)
Wish you would add some Touchdesigner or Resolume benchmarks to your testing.
Hi
Are you planing to make another video for new Threadripper?
Yes!
The 5965WX is a workstation CPU; it's a workhorse for heavy calculations like CAD and CAM design or scientific calculation and simulation.
And there's still no dual-Threadripper motherboard for it. A horrible crime
The table at 8:24 seems to be incorrect. Single core values are the same for three of the cpus
Edit: I saw the pinned comment
My 13900K works with all 4 DIMMs at 5200 MHz, very stable. Not sure where you got the 4000 MHz from.
Looks like a good "cheap" server option for a "small" ESXi installation.
I didn't see the mistake (or plant?)... I was working and listening to you at the same time and occasionally looked at the video. Great video, as usual, by the way.
8 memory channels are the most important thing for Threadripper, mainly for CAE simulation workloads. Otherwise, the consumer-grade 2-channel i9.
I own and use both a Threadripper 5975WX and a 13900KS CPU.
What is the Intel equivalent of the 5965WX processor?
You would only want the 5965WX for the PCIe lanes or ECC memory... otherwise it's a waste of money.
For productivity you either go with the best of the consumer chips for the best price-to-performance, or you just go with the 64-core if you need raw power. All the Threadrippers in between are pretty much irrelevant if you ask me. I mean, sure, for someone it might hit the sweet spot, but if you can use 32 or 48 cores, chances are pretty high you could also put 64 cores to use.
I'm guessing typos for the Geekbench 5 results.
2231 all the way lol
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
I have three ASUS quad cards and one HighPoint quad card. They need 4 PCIe slots with 16 CPU lanes each. Octa-channel 3200C14 RAM has four times the throughput of 3200C14 dual-channel RAM on a desktop PC. Old software written for 4-core Intel wants faster clock speed, not a higher core count.
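The 4x figure falls straight out of the channel count. A quick back-of-the-envelope sketch of theoretical peak bandwidth (ignoring timings like C14, which affect latency more than peak throughput):

```python
# Theoretical peak DRAM bandwidth: MT/s x 8 bytes per 64-bit channel x channels.
def peak_gb_per_s(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000

dual = peak_gb_per_s(3200, 2)  # desktop dual-channel DDR4-3200
octa = peak_gb_per_s(3200, 8)  # Threadripper Pro octa-channel DDR4-3200
print(dual, octa, octa / dual)  # 51.2 GB/s, 204.8 GB/s, 4.0x
```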
The TR supports RDIMM ECC and is for 24/7 processing. You don't buy this for single-core software unless you will run VMs.
What happens in games?
What is the best-value case, CPU, and cooler for creators? Builders need a foundation on where to start building a good computer; the case and CPU cooler are a good starting point.
You don't compare price with performance, so how can you tell which CPU is the best bang for the buck?
That’s why people like GamersNexus exist
Sorry I'm not sure I'm following what you're trying to say? :)
You don't compare the price of the CPUs versus their performance - bang for the buck. I don't see price in the performance comparison. Even at the conclusion you do not mention price.
Well... they're in such different classes and use purposes that it very much depends on your use case. If you need PCIe lanes, the TR is the way to go, but if you need a fast CPU, then the 13900K makes much more sense and is much better bang for the buck.
It's like comparing the fuel consumption of a bus and an SUV, if that makes sense. Depends what you need ;)
Price to performance doesn't really matter.
If you're a company, you'd never go for the consumer CPUs, with no ECC support.
If you're a private buyer, you'd never go for the Threadripper, unless the performance or price were right.
It would be nice if we had 8 channels of DDR5 with 13900K- or 7950X-class performance processors
All those Intel gamer CPUs are fine, but they only have 20 PCIe lanes.
GPU = 16 lanes, 4 NVMe drives = 16 lanes, 10 GbE LAN = 4 lanes, add-on card with 4 NVMe drives = 16 lanes, 4-ch Blackmagic card = 8 lanes, 8-ch audio card = 4 lanes. Only Threadripper can deliver 128 lanes.
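Adding that list up makes the point. A small sketch of the tally (device lane counts taken from the comment above):

```python
# Lane demand from the device list above vs. a 20-lane consumer CPU.
devices = {
    "GPU": 16,
    "4x NVMe drives": 16,
    "10 GbE LAN": 4,
    "quad-NVMe add-on card": 16,
    "4-ch Blackmagic card": 8,
    "8-ch audio card": 4,
}

needed = sum(devices.values())
print(f"needed: {needed} lanes, consumer CPU: 20, Threadripper Pro: 128")
# needed: 64 lanes, far beyond 20 and comfortably within 128
```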
I'd be curious to see this updated with the Xeon w7-2495X in place of the 13900K. That way we'd compare apples to apples.
This video should have been the Intel Xeon W7-2495X versus the Intel i9-13900K... 24 vs 24 on Intel both with PCIe5 and DDR5. Threadripper is old tech now with only PCIe4 and DDR4 etc.
P.S. The 2495X outperforms the 13900 on multi-thread tasks and has more PCIe lanes etc.
Agree - I don't get the obsession of the old Threadripper, especially when you know a new version is around the corner.
@@Hansen999 - I recently built an Intel Xeon W7-2495X, ASUS W790 ACE, 512GB DDR5-4800, RTX-3090 system. It literally kills my R9-5950X, ASUS ROG, 128GB DDR4, RTX-3080 system in multi-threaded tasks.
I run mostly high-thread-count software with the systems, and I purchased the Xeon because I need the 512GB minimum, I'm waiting for 256GB RDIMMs to get popular so that I can bump it up to 2TB.
I looked at the TR Pro currently available, but saw them as dead-ends with PCIe4 DDR4, plus the TR Pro 24-Core system I priced out in Canada was $20,000 CAD, compared to the Xeon at only $12k.
I decided not to wait for the next gen TR because they will no doubt be even more expensive here. We pay a premium for hardware.
@@Hansen999 Because Threadripper pretty much killed Intel's HEDT. People back then used to have X58, X79 and X99 even for gaming. Nowadays most people don't even know what socket they're using. Threadripper simply conquered all the mind share when it comes to productivity PCs, and that's why people are obsessed with it.
@@RafitoOoO Very true but Sapphire Rapids has arrived and it dominates the current Threadripper even with less cores.
Which one is the winner?
thanks
Wonder if the AM5 platform will see a Threadripper soon.
The math is simple: 2 GPUs at x16 = 32 PCIe lanes, 4 NVMe SSDs at x4 each = 16 lanes. We need a CPU with a minimum of 48 PCIe lanes to build a professional workstation. The alternative is to have a good chipset that can switch all the necessary lanes to the CPU very fast.
Honestly, GPUs are fine with 8 Gen 4 lanes each, and man, 4 SSDs? Isn't 2 more than enough? 24 PCIe lanes is enough
@@vinylSummer Honestly, NO: a) system drive, b) source drive, c) target drive, d) footage drive, e) scratch drive. The results after the project go to the sinkhole NAS afterwards, which doesn't need PCIe lanes, and the scratch drive can be a quad-NVMe card (in RAID) using 16 lanes by itself. So in sum: 16 for the GPU, 4 for NVMe 1 (a), 4 for NVMe 2 (b), 4 for NVMe 3 (c), 4 for NVMe 4 (d), 16 for the NVMe card (e), which comes to 48 lanes, not counting a reserve for e.g. another card using PCIe lanes. AND NO, switching lanes via the chipset is NOT an option, as it DIVIDES the bandwidth. The question is about maximal CONCURRENT throughput, not sequential throughput. Of course, it's a use case normal users (me included) don't have. For a private user a fast switched solution via the chipset is fast enough, as they won't notice a bandwidth limitation due to switching; in interactive use the user is always slower than the computer and is the bottleneck. But if you have system, source, target, footage and scratch on the same physical drive, you will notice the limitation (even on 2 physical drives).
teal deer version:
what you are paying for are PCIe lanes and memory channels.
Theoretically that means the 5995WX is a better CPU than the 7800X3D for gaming, since it has more L3 cache?
Well, not really, 'cos the single-core performance is much lower.
I guess as a gamer, a Threadripper is overkill. Now I just need to know if installing 2x RTX 4090 24GB is overkill for a gaming rig.
Unless you’re mining crypto, the answer is yes!
the DaVinci resolve negative numbers are green instead of red
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
Well, I would buy AMD regardless.
I remember how "nicely" Intel treated me when they were on top and I refuse to support a company that resorts to illegal actions to go ahead.
I haven't used keys on Windows for a long time, since I learned the ways of pirating.
What's wrong? That's a terrible choice of shirt color, given the shades of your background 🙂
The 13900K's boost clock is 5.8 GHz, not 5.6, and most will do 6.0 or 6.1 with a good mobo.
I paid $800 for a 2990WX in 2019; it's still pretty fast for what it is and will do 4 GHz all-core easily.
the numbers on the geekbench and davinci resolve were wrong
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
no doubt the intel cpu is much better, not to mention way cheaper
for some of the workloads obviously. but when you buy a workstation CPU you generally do it because you need the extra CPU cores, memory bandwidth and/or extra PCI-e lanes. (and official support for ECC memory - this is very important for some)
The average YouTuber content creator couldn't care less about stability.
For a content creation company though, ECC memory is critical. They would never in their right mind choose an Intel consumer CPU over any workstation platform, Intel or AMD.
@@akyhne correct, mission critical systems will generally have ECC memory.
as a side-note, you can go with AMD consumer CPUs since you can find ECC support on many motherboards (but you need to do proper research on what works and what doesn't).
@@mariuspuiu9555 Yeah, I actually just bought an AMD Ryzen 9 7900 platform, with 8 gigs of RAM, just to get started.
But I'm considering going for 64GB of ECC. I would probably just go for a RAM module listed on the Asus website for the MB.
There are a few things that I don't know. Does ECC support dual-channel mode?
And what the heck does this in my manual mean?
"Non-ECC, Un-buffered DDR5 Memory supports On-Die ECC function."
7:54 mistake
7:53 Geekbench single scores are the same
Thanks for pointing that out. Made a correction in the pinned comment! ;) 👍🙏
👏👏👏👏👏👏
9x the price for double performance in some apps, hmmm
"3 times cheaper" = Price - 3 x Price = -2 x Price
Please don't use this phrase.
Geekbench 5 and DaVinci Resolve have data mistakes
See pinned comment.
this is kind of an unfair comparison, intel hasn't released their next gen xeon chips yet
How is it unfair? He's not comparing it to any old Xeon, and if he was, it would be fair to compare what's on the market.
RTX A5000 24 GB & A6000 48 GB review, especially on Blender, please...
The i9-13900K has a max frequency of 5.8 GHz
Should have compared 3 i9s with one Threadripper 😂
Love the content, but seriously, the charts suck. It took me a while to understand them
✌✌
So... why are you being dishonest, comparing an $1850 CPU against a $570 consumer CPU? Is it because if you grabbed a comparable Xeon processor, it would blow your AMD out of the water?
With the amount of crap news coming out about Intel processors, I guess now there is no more doubt about what to buy. 😂😂🤣
The 13900K is 5.8 GHz, not 5.6
and it also doesn't pull 340 W. Mine never pulled more than 250
10x the price for 2x the performance.... AMD loses it.... RIP AMD
😂😂😂
No! It depends on who you are. A company would never choose the consumer CPUs, no matter the performance difference or price.
And AMD smokes Intel when it comes to workstation CPUs.
Conclusion: Threadripper is a bad option for gaming!
Second!
Green negative scores... BTW, most results are actually an advertisement for the 13900K!
The 13900K has fake cores, so it's not even a fair comparison 😂 Intel wants to make you believe that 24 cores beat the 16 true cores of the 7950X when they both have the same number of threads. Don't be deceived by Intel's E-core nonsense.
Wtf
This is not a fair comparison: the 5965WX has 48 threads, the 13900K has 32... Try comparing the 5955WX (16C/32T) with the 13900K instead, because the 13900K has 16 single-threaded cores (E-cores) and 8 hyperthreaded (2T apiece) performance cores.
third
The guy's become an Intel shill, sadly
Intel is always better in any CPU comparison. Intel is best, no matter what the AMD fanboys say
AMD kills Intel, in the workstation and server market.
@@akyhne And consoles, mini PCs, gaming CPUs and 3D CPUs.
Intel NUCs are dead because of AMD's iGPUs.
@@akyhne AMD has less then 20% of the server market share. Q1-2023 AMD server market share was 18%. Their desktop share was 19.2%. Their mobile unit share (notebooks/mobile sector) was 16%. Not sure what they're "killing". Their GPU market share was just lost to Intel in the last month or two (and they've been making video cards for less then two years), with Nvidia dominating that at around 82%~.
The only thing they have going for them is console's, and that's only because there are no modern intel powered consoles.
@@Smith555 I wasn't referring to market share, but to their products. AMD CPUs beat Intel hands down!
And since you know so much about market share in the server market, I'm sure you also know why Intel still leads.
It's all about adapting to a new platform - rewriting code etc.
If you look at the top-ten list of supercomputers, AMD holds 1st, 3rd, 8th, and 9th place. Intel holds 10th place.
That would have been unheard of just 5 years ago.
Supercomputers are also planned many years in advance, which is why AMD will probably overtake Intel in the server market within a few years.
AMD server CPUs are more power efficient and just plain faster, holding 300 world records with their Epyc CPUs.
@@akyhne In the TOP500 supercomputers, Intel has 90% supremacy
First
The i9 still kills the 7950X and partly the 5965WX
Time to block your channel
The 13900K isn't a true 24-core CPU
The E-cores are weaker, but they are still CORES. They aren't AI-generated.
What do you mean by that? @chriswright8074
I think if it had 24 p-cores?
How come e-cores aren't real cores?
@@theTechNotice Because it's basically a hybrid CPU, more like ARM and mobile CPUs. In Intel's case the E-cores aren't full P-cores but are more for offloading. I'm not saying they aren't helpful, but they don't have the pipeline for bigger tasks. I don't count them as full-fat cores because they aren't; they're more like Atom cores. Intel has a problem with their design and the power it draws; that's why they went with this design, to help increase performance without a massive power draw, instead of making 16 full cores with HT.
whokeys is a scam