@@andljoy Doesn't really help when your productivity software is Windows-only. I'll stand by this: asymmetrical CPUs where simple core scheduling can cause a massive negative impact is simply terrible CPU design. If you want to save power or costs then just buy a cheaper CPU - and you also get consistent performance on top for free as well.
I'm now a pure Linux user on my desktop. Gaming with Bazzite. Best experience since Windows XP. 😁 I say "desktop" because I use a MacBook Pro as well. The macOS we know today was born from Linux (NeXTSTEP), so I guess I've just favoured that sort of environment over the past few years.
@@eQui253 nice, I’m using Debian 12. Arch Linux is pretty nice although I wanted to slow down the updates especially since I wanted to use ZFS and not have an update break anything. LFS is also a bunch of fun to compile. :)
As a game dev I can say that by default, windows can schedule the “main thread aka mainloop” multiple times per frame on multiples cores, same for any worker thread. Moving to another core is either costly directly : you have to copy all the registers, internal registers, stack, and the cache l1 , partial or total of l2 and l3,(and caches size is getting bigger and bigger, that’s a lot to copy), or do it “fast” and let the process ask for memory : copy all the registers, internal registers , stack, do not copy the cache, letting the process ask for a memory address get a cache miss, filling the caches for that call, making the first execution of the process on a new core, very costly than stay at least one frame, and better many frames on the same core. On console I always pinned all my threads to a specific cpu, on windows, yes you can see some gain, by pinning a thread to a cpu, but you could also see that the lowest fps is also getting lower, in some case much lower. The problem, windows is not a console. On a console we get a guarantee that a list of core are for the game, and that no system thread would run on those cores. On windows, you can pin your thread to a core, and run your mainloop as fast as you could, it works, for some time, until windows decide that your thread did run long enough, and interrupt you as soon as you make a system call , and that call, instead of lasting 80ns as usual can last 100, 400 ms , because windows decided to use “your core” for something else, trashing the cache at the same time. Cohabitation is possible, but it’s very very hard, and when you think you have the solution, you find that sometimes, when you start a thread, sometimes it start only 100ms later… those are the usual lags and stuttering in windows game. So thread pinning = good, on windows, it give some gain, but also some loss, that could be bigger. This is the reason most musicians prefer macOS, the thread scheduler, not having those 100ms lag than an app under windows could have. 100ms for a game or for midi sounds or wave, it’s a lot. And the cure, it to had a one or multiple frame delay, fixing frame rate but introducing lag. TLDR windows thread scheduler = bad
PS: The "solution" on Windows is to not CPU starve the OS So, give the OS time to do whatever he wants, but at a time that "suits" the game engine. Usually by adding a simple sleep(1); The 1ms, in this case, is not guaranteed; it depends on the OS, The more starved the OS is, the longer the sleep(1) is. So, the game manages the OS by giving it a percentage of its time. (upside down world) It's not perfect or "efficient," as you lose "compute time," but it works. I have yet to try "for science." There are system functions to change if a core can be used by Win! What happens if I transform my PC into a quad-core for Windows and all the rest just for my code to profile actual performance?
@@tiedye001 Same as CPU Affinity; you get more time, but you "piss off" more threads with an "important" job, so when the OS can interrupt you, the lag can be violent. Thread priority is suitable for a 3/4 ms task; for example, if you want it done fast, it's not ideal for constant performance 100% of the time. This is why a console can beat a more expensive PC by 20% to 30% between a similar CPU and the same frequency (Rainbow 6 Siege), between Windows and a PS4, for example.
I am so here for your analysis of this scheduling issue, i would really like to know wtf is up with windows scheduler and this seems an excellent case in point. All my machines run ProcessLasso, because i have to tbh for audio Take your time on it, ima watch the whole thing XD
A couple of my audio plugins can use AVX-512. For example, 2CAudio Breeze which is a nice reverb. I hope Zen 5 encourages audio developers to start using AVX-512 more.
@@HolarMusic Good to know. The only application I have for AVX-512 right now is the Topaz AI upscaler and GPU performance is an order of magnitude better than CPU.
@@budthecyborg4575As far as I'm aware AVX 512 workloads are still very much in the high complexity territory that CPUs beat GPUs at. GPUs tend to be best for stuff that can be broken down into a lot of very, very small tasks (or AI, but that's in no small part because of dedicated AI accelerator hardware)
@@bosstowndynamics5488AI inference is for the most part just multiplying your input with all the weights and biases in the model and AI training is just fancy matrix calculus, which both runs really nicely in parallel
Phoronix just released his review and the numbers on Linux are just incredible. Too bad the only thing we are going to hear is how bad those are for gaming... He came with a Geomean of around 17.5% over the 7950x.
That's the thing, the average results are heavily carried by avx 512. Nothing to do with linux/windows. I can flip it and say that 20 to 40% perf increase from double execution units width is just pathetic... Should be at least 70 to 90%
@@panjak323 you clearly didn't read the review and are trying to spread FUD, and you don't understand geomean. The graphs include AVX workloads removed and it's still way up. God the comment section is so brain dead.
Blame HUB and Gamers Nexus and all of their cronie tag along creators that dote on their every word, for that kind of rubbish. It's not all about benchmarks, it's about real world utility and performance which they fail to understand for a 12-16+ core CPU. Reviews from people with real expertise are much better.
One of these days people maybe get they are just amateurs wannabe standard just because one has a lab and another overbenchmarks like there is no tommorow.@@moonstomper68
@@tuckerhiggins4336 The reason you'd benchmark a $200 CPU using a rig with a 4090 is because you don't want it (and all the other CPU's as part of the testing to compare it to) to be constrained by the GPU. If you tried benchmarking a $200 CPU in a game and used a $200 GPU as well, all the CPU results on the higher end of the chart are going to cap out at the same framerate because the GPU is the bottleneck. The benchmarks are meant to compare one CPU to another using the same environment variables in order to show relative performance and value, not to show "how fast does this game run on my PC if I buy that CPU?"
Agreed. As much as I love GN and HUB I'm getting tired of them ignoring Linux. (GN shows Chrome compilation is a good step in the right direction.) When SOME people such as Wendell or Phoronix ARE seeing a performance uplift in Linux then *we should be asking:* _Is the Windows task scheduler problematic for Zen 5?_
Exactly, I mentionned it on hub and their response was to re test on Windows not changing shit or bothering to investigate the issue at all. Like guys, isn't this your job?
Michael Larabel at Phoronix loves them. Particular attn, the avg scores, like avg of all creator, avg of all database, avg of all games ... About the only thing the 9950 doesn't mop the floor with, is power efficiency. The lower 9ooo series do rather better there. There are a few outliers where the intel 14th gen does stunningly well, but those are outliers. Compiler performance is stunning!
I'm guessing the 9950x can shine on power efficiency if power limited, maybe set tdp to 105W which would effectively be an eco mode on a 170W default tdp. I want to see those benchmarks. The main issue is memory bandwidth. If your workload uses avx512 in particular an all core workload probably easily saturates 16 cores, it likely saturates 8 cores, it's funny that 6 core zen5 may be the most sensible way to run these types of workload.
Linux Reviews from this channel and Phoronix Review clearly show a major difference between the 9950X and 7750X, where as tech sites and channels using windows to review the CPU's show the gains to be marginal or even to have regressed in some cases. This sounds like a windows issue more than anything. Windows really sucks.
SO, the most widely used platform thats been out for years.. and a new piece of hardware comes out where the hardware company makes the drivers.. and its the OS fault?
I’m not a gamer, just a photographer and fine art printer. So, even with all the bad news and confusion re the Ryzen 9000 series, I decided to go ahead and replace my 7900 with a 9900X anyway. I have an MSI MPG B650i EDGE WiFi motherboard in an open case with an ID-Cooling SE-207-XT air cooler to which I have attached a second fan. The 7900 runs with DDR5-6000 memory, Game-Boost and the TDP elevated to 105W. It pulls 142W running Cinebench R23 with a high temperature of 79C for a multi-core score of 27550. When I replaced the 7900 with the 9900X, with the BIOS cleared except for EXPO, Cenibench R23 scored it at 32214 with a high temp of 81C. That’s a 17% improvement, much better than I expected. Think I’ll be keeping the 9900X and seeing how much I can wring out of it with a better cooler. I have no intention of playing the core parking game. BTW, B&H is selling the 9900X for $50 less than everybody is reporting. They also pay the tax if you use their PayBoo card. No connection to B&H, just a happy customer for many years. Some good news re the 9000 series is overdue. I appreciate that gaming drives the technology, but other users make up a large part of the marketplace, so it seems wrong for gaming to influence the whole picture. Even though AMD fumbled the rollout, the 9900X is still a great CPU for non-gamers.
@ - Thernmalright Phantom Spirit 120, dual tower 120mm. Good up to 180W. I’m overclocking to that power level and getting 34000 multicore on Cinebench R23, 2222 single-core.
great review, thanks. very level headed and no hyperbole and unnecessary drama. Too many youtubers are concentrating purely on gaming and not as a whole package. I agreed with your conclusion, spot on.
I believe some reviewers cater to a younger audience, and I find the lack of professionalism off-putting. Being on RUclips doesn't necessitate clickbait for reviews. This is why I've mostly returned to reading website reviews, such as those on guru3d.
@@ThaexakaMavro Chips and cheese should give a more in depth architectural analysis of zen 5 and shows regressions in several instructions as well as memory. Memory regressions alone would explain the lower than expected gaming performance.
Here's something we're seeing in a test of the 9950X compared to 7950X, which we did as a ProxMox test. Inside of ProxMox hosts, we're seeing a significant improvement vs. at the wall power utilization. But in Windows itself, when non-virtualized (not using a QEMU CPU) the benefits are not being felt in any real mode; we don't really test for gaming. Now, obviously, we're comparing a point that is below straight to hardware to hardware virtualized. But the benefit does show up, and that is interesting.
Microsoft is more concerned with making money off their customers than optimizing performance on so many fronts. Gaining market share and taking the hit to their brand in windows is not on their radar when Ai and azure is making the shareholders purr.
@grimfist79 He literally called out that, as a gamer, it's not wise to upgrade from Zen 4 to Zen 5. But there are more workloads than just gaming. You need to stop pretending that everyone who disagrees with you is a fanboy. He said he found some performance discrepancies that he needs to investigate further. If anything, he's more anti windows than pro AMD
Zen5 may well have potential, but I chose to follow Wendell's advice to buy a CPU based on how it performs _now_ rather than on some nebulous possible future capability. This morning, after checking the last round of Zen5 reviews, I bought a 7800X3D. It will anchor my main gaming PC for the next 2-3 years. Maybe Zen6 will make good on Zen5's unfulfilled promises.
Yeah. I bought a threadripper 7970X after zen 5 was delayed because at that point zen 4 was rock solid, I'd been on a 7950X for 18 months and I just needed more for my work.
First! And the first ryzen9 reviews I've seen! That admin/windows trick thing makes me think in a couple months from now these processors will be a little bit better once either people or amd/windows figures their stuff out. Also still makes me excited for the x3d variants
Here's an idea for benchmark: Game + OBS streaming (or recording). Because many workloads today aren't just the game running by itself so I wonder how would it all look like in scenario like the one above.
For couple of months now I play on Windows with SRV-IO and IOMMU turned OFF, along with Memory Integrity and Core Isolation also turned off. The difference in 3DMark results and gaming experience are confirmed for the better. I would like to note, this is my gaming only machine, I do have another one for AI LLM testing with 7900 (no X) where all these options (in BIOS at least since there I run Ubuntu) are turned on. But it surprised me that turning those OFF for gaming, made by 3DMark results better, and my gaming with Freesync enabled monitor with vertical sync limit to 120HZ on 5120x1440 rarely drop in CoD MW3 multiplayer (even for a second or two). My GPU is 7900XTX from Sapphire.
This is interesting, I haven't heard about SRIOV and IOMMU affecting performance. You're not running Windows in a hypervisor? Do you have a ballpark estimate of how much impact it had?
@@DanielKennedyM1I suspect that the entirety of the difference came from the software side - SR-IOV isn't even available on consumer hardware and the UEFI option is really just whether or not to let network cards report to the OS as multiple separate devices, and enabling the IOMMU doesn't change any of the performance characteristics of the chip. On the software side though, core isolation and related features mean that W11 kind of sort of does run on a hypervisor, by default, because it uses Hyper-V to sandbox some of the drivers and other system components to prevent privilege escalation. It's honestly one of the few ideas in W11 I actually like (assuming it's actually securely implemented), brings some of the security by virtualisation stuff from niche systems like Qubes to mainstream users, but naturally that does come with a bit of a performance overhead and that difference is enough that gamers who don't understand the security implications wind up disabling it (or of course power users who don't run sensitive workloads on their gaming system like the above commenter who knowingly take on the risk)
I'm wondering if windows is simply using Intel code hence why everything seems to run faster on Intel? I mean, Intel and MS have always worked together (wintel name exist for a reason) and the OS is favoring Intel . Time to recreate all these benches using Linux!
I'm not really a huge fan of tin foil heading but I do think this is somewhat true. Intel and Nvidia have worked many times together with Microsoft. Especially Nvidia has the reputation to be in frequent contact. Meanwhile AMD has quiet the reputation of treating their software partners very poorly, so this might actually be a thing.
Thank you Wendell for all the work. Great review and personally I love to see someone who's leaning more on the relative than the absolute side of conclusions.
I can't believe after all this years Windows scheduler still does not fully support AMD's dual CCD layout so that AMD has to resort to crazy hacks with game bar and drivers to basically shut down half of the CPU you paid for to avoid performance degradation caused by suboptimal scheduling.
All these years? All these 1 years? Zen 5 is the first time AMD has had any need for special scheduling for symmetric dual CCD chips, prior to this it was only the 7900x3d and 7950x3d that needed advanced scheduling and most x3d buyers were going for the 7800x3d anyway. Yes, Microsoft absolutely should have fixed it (particularly since Linux already accounts for CCDs apparently) but it's not like it's been many years of wide deployment of the core parking stuff
@@bosstowndynamics5488 "symmetric" does not equal monolithic die where all cores have uniform access to caches and memory. As each CCD carries its own caches having threads reassigned between them randomly, or threads of a single process running on multiple dies incur significant penalties due to cache misses and synchronization. Dual CCD design first debuted in Ryzen 3000 series, or even earlier if you count 1st gen Threadrippers. So yes, all these years and Windows still suffers from these problems. I'm not sure how Intel does it with their non-uniform core designs, but it seems they were able to convince MS to care at least somewhat. While AMD's best effort is to hook up into game bar logic and disable half of your CPU.
@@MikeKrasnenkov Saying Windows "still" suffers from these problems is misleading - the performance penalty from communication between CCDs in everything up to Zen 4 for symmetric configurations was so small that pretty much no one noticed, so it really shouldn't be a surprise that Microsoft didn't bother to fix it. Intel forced their hands because all of their parts had 2 radically different types of core in them, whereas even AMD's heterogenous designs with Vcache (which have only been around for less than 18 months and are far less common since most gamers are going for single CCDs) will work just fine with scheduling misses, they'll just be slower. And the issues with the scheduler for symmetric dual CCD designs have only just now come to light pretty much today as far as the public is concerned.
@@bosstowndynamics5488 Why was the penalty so low before Zen 5 and suddenly so high now? Zen 5 should be pretty much the same what you call "symmetric" design as previous Zen architectures. What changed?
@@bosstowndynamics5488 All these years. Even first Zen had two CCXes. Now CCX is 8 core CCD but not always. Ryzen 7 370HX has 2 CCXes, one is 4 core Zen 5, second is 8 core Zen 5C. First Ryzens had 2 four core CCXes in one CCD. Ryzen 2000 was the same. Ryzen Threadrippers of 1st and 2nd gen not only had 2 CCXes per CCD but also used multiple CCDs. And ever since Ryzen 3000 launched, the basic configuration of desktop CPUs wasn't changed. It's still 1 IOD and 1 or 2 CCDs. TIt's been almost 5 years since 3950X was launched and like 7 or 8 since first Ryzens debuted. Intel launched Alder Lake three and a half years ago. And it's not the first time both AMD and Intel suffered because of M$. BOTH companies had to write their own drivers to improve scheduling on their CPUs because despite their work and continuous push, M$ refuses to make WIndows scheduler better. Even after publicly committing to making a fix. AMD integrated their driver within chipset driver and uses GameBar to assign game to the correct cores. Intel wrote APO doing just the same thing but, afaik just without the help of GameBar. If a Linux shows improvements and Windows doesn't while Linux doesn't even need the software hackery, it's safe to say the hardware is not at fault, nor is hardware vendor.
Great video and a great technical review, not just running benchmarks, the majority of review channels are more superficial and less technical. I remember when you said the performance of 1st gen threadripper may not be a problem of AMD but a Windows problem because in Linux it was ripping everything.
Good stuff! I'm still on an Intel i7-6700 PC build, so anything would be a nice upgrade at this point. I do video/photo editing, streaming, and adjacent creative stuff more than gaming. So far the Ryzen 7900x price wise looks a lot more appealing and I hope it continues to get cheaper.
I still side with with Wendell on this one. There's something errantly wrong with windows . Even the x-elite is a letdown mostly because of Microsoft . Only time will tell lol i did the right thing and bought the R9 12cores for 280$ brand new one month ago.
I've got a hunch that the improvements that have been realised on the ARM side in terms of efficiency are probably at least partially due to Microsoft having to strip out a lot of 30 year old junk and could potentially be realised on x86 too if they got their act together and made Windows work properly instead of focusing on creative ways to juice their metrics and shove ads in users' faces
Still rocking a 12-core 1920X Threadripper in my homeserver. I love that platform. Seems like yesterday, like you said. If AMD ever launch a 24 or even a 32-core desktop processor, I'll make the jump. But for now the Threadripper lives on.
@@calldeltosellSteve @ GN & Wendell are friends. They view CPU & GPU from different view points. GN is a gaming channel. Level1Tech is an all around performance channel.
Almost certainly, and arguably was with the previous gen CPUs as well, though it'll always come down to Linux compatibility for any particular application. I wish Wendell would test Factorio. It's one of the few games that is heavily bottlenecked by memory performance, much more than CPU/GPU. Its (native!) Linux build has always run better than Windows, for a variety of reasons - the most compelling being Linux's support for Large Memory Pages.
After jumping ship from MS advertising and spying platform I was shocked to see that Linux Gaming is now not only viable but often on par with Windows running through proton. I find it not at all surprising that Windows gets outclassed by Linux on these new processors, as Linux is an actual operating system with a strong focus on performance as opposed to tricking the user into tracking its users and sending them ads. Where you put your dev efforts matters. (Also Windows is a house of fucking cards)
interesting comparison at the beginning, Zen 5 really starting to feel like Zen 1 from the past, it's something that will be better in future generations, Zen 6, 7, etc.
I find it interesting that the 9950x beats almost everything else in minimum (0.1%, 1%) frames in most benchmarks. That's interesting, and something I would like to understand more. I wonder how much smoother that feels.
Thank you for the comprehensive look-see, Wendell. 🙏🏼 Your views on these new parts (together with those of _Hardware Busters'_ Aris) are a *refreshing* (see what I did there?) departure from all the _weeping & gnashing of teeth_ that I've borne witness to on RUclips lately. 👍🏼 P/S: The check is in the mail. 🫰
I do agree that Intel giving up on AVX-512 was a big shame, I rarely give AMD credit but I do commend them for keeping it around. What many dont realize is that AVX-512 isnt just the extended registers and vector length, but it also comes with strong optimizations for previous instruction sets like AVX2 or older, which many apps use. Sadly AVX10 will just be a band aid.
If you want to max out the RAM on AMD platform, going with the ECC is probably the best way to do so. 48Gb ECC UDIMMs are available and the prices are decent at $200-250/ea. I mean, with that much RAM, random bit flips from the background radiation are inevitable.
Depends on what you're doing, the extra bit flips from large DDR5 sticks are supposed to be handled by the ubiquitous on die ECC anyway. Of course, not many workloads actually benefit from that much RAM so you're already selecting for things like home servers so full ECC is probably still a good call, but it's not mandatory for all high memory systems by any stretch
I'm still slumming it on an X99 platform, running everything, all those games plus more just fine. What helps is a 4K TV as you can use a desktop window mode for gaming, the size of the desktop window is similar to a large monitor (eg. 1920x1080p window is 27 inches).
Great video, but it doesn't consider price. In some of these benchmarks, the i7 is actually outperforming the 9950x. So why wouldn't anyone want a i7-14700K at half the price?
Thanks for another look at the zen 5 :-). The nice follow up would be to dive deeper into new zen5 microarch Vs win 11 basically to show/explain the current seemingly strange benchmarking results ... and to see what could be done to win 11 to use zen5 architecture fully.
If you want to run very high memory clock/timings, keep in mind that your memory will degraded overtime if you running at high voltage (even it stamped to your EXPO and XMP profiles that they designed to run on that voltage). You can run DDR5-6000-8000 at 1.35 or1.4v but it may not able to run at that speed after 2 years because it degrading very fast. Some Mainboards also inject higher voltage than you set in the BIOS. I found some ASUS boards giving DIMM 1.38v instead of 1.35v setpoint.
For the memory, I had problems with 2x2 sticks on my AM4 with my 5800X3D, the system would not boot AT ALL, if a specific stick was not in the right slot. Once booted, I could apply the 3200Mhz without problems. They layout is like you show, each pair on the same channel ( and there's a specific order, i lost hours trying to understand, and just bruteforce all the possibilities until it worked), unlike what's specified in the manual.
this channel brings meaningful review for me.. 1. linux based testing for PRODUCTIVITY use case. 2. non-gaming and balanced summary to tell us if it's something worth considering. I'm planning to upgrade from 3700x to the 9700x
The reason that 4K is not commonly tested in CPU reviews is because 4K is primarily GPU-bound and GPU limited. You want to create a CPU bottleneck at 1080p with the GPU maxed out with no upscaling which allows different CPUs to be easily compared
Just bought 9950X and an MSI X870E MPG Carbon WiFi motherboard and the USB disconnecting issue still exists/has come back, meaning the platform is unusable.....yes, the one that surfaced over 3 years ago when X570 was launched! Such a pain, an no influencers/tech sites are covering or mentioning this so would be good if influencers got together and held AMD and motherboard manufacturers accountable for this!
Thanks alot for the unbiased review. You brought up alot important subjects, us mostly none gamers realy likes hearing about. Like what to expect on DDR5 on 4 channel setup. I am a developer, so i use my system mostly as a server. Currently i have a 5950x, which is a fine little beast for this. But i see it being constrained on memory (DDR4) bandwith, when i start to load the 16 cores up, in my applications. I tested with a 7950x3D, and the way my programs allocate mem/cpu cores, doesnt benefiths this alot vs the 5950x. Therefor i hoped the 9950x woulld be a worthy upgrade. I had hoped we had seen a 8xZen5+16xZen5D cores edit, but seems that was a dream not coming true, before Zen6. Likewise i had hoped for a IO die with 1GB+ L4 cache to be shared with gpu/cpu, bit like broadwell, intel cpu... but no. So even Zen5 is both faster and cheaper than Zen4, i guess its going to be another wait untill Zen6 gets out... Again, thx for review.
yes u should get a:1 ccd chip aka 9700x or b:9800x3d bcos u dont really need tons of cores they just need to be fast and 9950x still has latencys between ccds and one is actually running I think 300,400 mhz higher than other..in short its waste and useless for gaming I mean its ok but ther would be no difference using 9700x in fact with 350€ left in wallet u could get better gpu witch would get u more...so yeah 9700x but if I think more nah..on long run pay 150 more and get 9800x3d
@@aelderdonian that statement is generally true if the update is promised by the same company that sold you the hardware, where the company is not financially incentivized to provide you software updates down the road, but we are talking about Microsoft that needs to fix their OS and specifically their scheduler, AMD has already done their job and their hardware works fine on Linux, it's 100% Microsoft's responsibility to fix their shit here
but microsoft would have several antiviruses runing on each core - the searchengine in windows 10 are as slow as running cyberpunk on a singlecore cpu and has been the last 8 years - I guess it has a conference with microsof if im legit to search on the harddrive
This CPU generation is getting some of the most different opinions across the lineup of critics I've seen in a while. Steve: "Meh." Linus: "Cool!" Level1Techs: "Get it for an upgrade of three years or more." Hopefully the X3D ones show a better, more obvious improvement. My opinion is that you just don't get the performance for the value at all. I guess I'm mostly with Steve (GN).
I appreciate this review. I'm someone who just started buying my parts in preparation for this CPU. I'm coming from the Intel i7-7800X and desperately need a new CPU, but when the rumors started circling around about the 9000's I decided to delay building a new PC. Seeing all the doom and gloom around it was a little concerning, but I mostly use Blender/productivity software so seeing the boosts there had me pretty happy, gaming is secondary, so as long as the game runs better than what I currently am dealing with then it's good for me. Though I gotta say I've been thinking of checking out Linux for a while now since I don't agree with the crap they are pushing into Windows. Maybe I'll do a dual boot so I can check it out.
If you ask me, the most valuable info in this video was the little "RAM break" talking about how and which ram runs, since Ryzen CPUs basically "stand or fall" with the RAM choice.
I'm running a 2950X as a streaming PC, so I have some PCIe cards in it for capture, and there might be other things soon. The 5600X in my gaming PC isn't THAT far away from it in processing power, and it's fascinating. For processing power, I could definitely go with many AM4 and AM5 CPUs for the streaming PC, but I'd lack the PCIe lanes, so any upgrade will be very expensive.
In regards to administrator providing better performance, this has been known for many years amongst the game cracking scene. Many cracked game installers will run the game as admin by default. There's also other reasons for this, like trying to reduce problems people may encounter. But this can obviously be abused by dodgy files. Running as admin certainly wont provide performance benefits with every single game, just occasionally. I've no idea why this happens, but it's been a thing for a very long time.
finally a comprehensive review. Gen-on-gen improvement is unimpressive, but fingers crossed it's a stepping stone to something better (10050X perhaps?)
Imagine windows being the problem and its neutering performance. I'm glad someone is saying something. No one else seems to have brought this up until you did.
I got Rocket Lake because of AVX-512 for RPCS3. Love my 11400, but even it can light on fire with a Z board and full limits removed, especially with AVX-512. 150W limit for that chip, 175W burst for my twelve year old Hyper 212 Evo (with new fan) before I get uneasy with temps.
Hey Wendell, after reading about the "Administrator" account performance gains, my thoughts went directly to the large page support. It's locked behind a security group policy AND requires to run the exe elevated. The GPO is under Computer configuration -> Windows settings -> security settings -> Local policies -> user rights assignment, called "Lock pages in memory". Linux iirc has large/huge pages support by default This was/is commonly used for mining applications where the gains are pretty similar, now I know there are some stability implications with memory allocation, I can't confirm if that's the main difference from the disabled Administrator account Since I don't own a 9xxx series, maybe you could test if this is the setting that gives that extra performance?
It's a great take with solid hypothesis on why Zen 5 isn't performing as it should. Anandtech did a great job at analyzing inter-core latency, maybe it's not the fact that they used the same cIOd they used on Zen 4, maybe it's windows 11 scheduler that isn't supporting this new uArch properly yet?
5:32 "PBO doesn't add anything" this is not true. You have simply toggled it on without tuning it. using Curve Optimizer will result in the same watts as stock and yet better performance and lower temperatures than just PBO. I wish people would use the tools given by AMD to benefit themselves. It's just odd to shoot yourself in the foot especially as an enthusiast. Leo from Kitguru has demonstrated this.
Interesting point that you make about Linux vs. Windows game performance. I have been seeing hard evidence that GPU performance (NVIDIA) is better on Linux machines over windows for AI oriented tasks, like embeddings, for example. My theory is that that CUDA driver performance is heavily optimized for linux due to heavy usage in Transformer model training by the likes of OpenAI, etc.
One thing I haven't seen: If you use something like Proxmox, NUMA correctly configured to use a correct CCD and no SMT passed on to the Windows VM, how does the game work? What about Linux? I find that sometimes you can tweak KVM to do stuff that can get pretty close to optimal performance in baremetal. Thanks for the great work!
Watching this, hearing you say Windows is doing something weird, and having just seen Hardware Unboxed's video about what I expect is the same issue is interesting.
I’d like to see more about how Windows could be the problem w/ consistency, since Jayztwocents was stating there was wacky behavior between Intel & AMD non-3D processors, AMD 3D models were performing similar to Intel’s offerings. Jay was wondering if it’s how games are handling the multi chip architecture of the 3D models vs their non-3D counterparts, think single socket vs dual socket systems as an example what thought line he’s going with. My thought after your video is, what if Jay’s theory should be in relation to the CPU & Windows vs the CPU & games. As an unrelated side note, I’d double check your test result slide titles since there are few that the title said 1080P when you were showing the 1440P & 4K results. Appreciate the review!
While gaming is undeniably fun, these powerhouse 16-core processors are built to unleash your full potential beyond just gaming. They're designed to handle the most demanding tasks like professional video editing, 3D rendering, and complex simulations, empowering you to create and innovate without limits.
Windows is doing something strange. It's busy trying to load adverts into everything.
Everyone needs a threadripper to load all the ads !
I can just de-bloat and strip down Win11 when update support ends for Win10 next year, right?
@@handlemonium Why bother , just run a decent OS.
windows what's to see where you are going in the games to advertise they plan on putting ads in the new games.
@@andljoy Doesn't really help when your productivity software is Windows-only.
I'll stand by this: asymmetrical CPUs where simple core scheduling can cause a massive negative impact is simply terrible CPU design. If you want to save power or costs then just buy a cheaper CPU - and you also get consistent performance on top for free as well.
Thank you for the nuanced review of Zen 5, I really appreciate the Linux experience in addition to the Windows experience since I dual boot both.
Did I mention that I use arch linux ?
I'm now a pure Linux user on my desktop. Gaming with Bazzite. Best experience since Windows XP. 😁
I say "desktop" because I use a MacBook Pro as well. The macOS we know today was born from Linux (NeXTSTEP), so I guess I've just favoured that sort of environment over the past few years.
@@eQui253 Nowadays NixOS is the new Arch Linux. I use NixOS, btw.
@@eQui253 nice, I’m using Debian 12. Arch Linux is pretty nice although I wanted to slow down the updates especially since I wanted to use ZFS and not have an update break anything.
LFS is also a bunch of fun to compile. :)
@@Fractal_32 I don't use Linux.. it's a meme.
As a game dev I can say that by default, windows can schedule the “main thread aka mainloop” multiple times per frame on multiples cores, same for any worker thread. Moving to another core is either costly directly : you have to copy all the registers, internal registers, stack, and the cache l1 , partial or total of l2 and l3,(and caches size is getting bigger and bigger, that’s a lot to copy), or do it “fast” and let the process ask for memory : copy all the registers, internal registers , stack, do not copy the cache, letting the process ask for a memory address get a cache miss, filling the caches for that call, making the first execution of the process on a new core, very costly than stay at least one frame, and better many frames on the same core.
On console I always pinned all my threads to a specific cpu, on windows, yes you can see some gain, by pinning a thread to a cpu, but you could also see that the lowest fps is also getting lower, in some case much lower.
The problem, windows is not a console. On a console we get a guarantee that a list of core are for the game, and that no system thread would run on those cores. On windows, you can pin your thread to a core, and run your mainloop as fast as you could, it works, for some time, until windows decide that your thread did run long enough, and interrupt you as soon as you make a system call , and that call, instead of lasting 80ns as usual can last 100, 400 ms , because windows decided to use “your core” for something else, trashing the cache at the same time. Cohabitation is possible, but it’s very very hard, and when you think you have the solution, you find that sometimes, when you start a thread, sometimes it start only 100ms later… those are the usual lags and stuttering in windows game. So thread pinning = good, on windows, it give some gain, but also some loss, that could be bigger. This is the reason most musicians prefer macOS, the thread scheduler, not having those 100ms lag than an app under windows could have. 100ms for a game or for midi sounds or wave, it’s a lot. And the cure, it to had a one or multiple frame delay, fixing frame rate but introducing lag.
TLDR windows thread scheduler = bad
Generally speaking, do you think there's a notable advantage/disadvantage to scheduling in Win11 vs Win10?
@@JJFX- I still need to try Win11.
PS: The "solution" on Windows is to not CPU starve the OS
So, give the OS time to do whatever he wants, but at a time that "suits" the game engine.
Usually by adding a simple sleep(1); The 1ms, in this case, is not guaranteed; it depends on the OS,
The more starved the OS is, the longer the sleep(1) is.
So, the game manages the OS by giving it a percentage of its time. (upside down world) It's not perfect or "efficient," as you lose "compute time," but it works.
I have yet to try "for science." There are system functions to change if a core can be used by Win! What happens if I transform my PC into a quad-core for Windows and all the rest just for my code to profile actual performance?
Does thread priority have any affect on this problem? Does the realtime priority fix it at all?
I know nothing about this stuff just curious
@@tiedye001 Same as CPU Affinity; you get more time, but you "piss off" more threads with an "important" job, so when the OS can interrupt you, the lag can be violent. Thread priority is suitable for a 3/4 ms task; for example, if you want it done fast, it's not ideal for constant performance 100% of the time.
This is why a console can beat a more expensive PC by 20% to 30% between a similar CPU and the same frequency (Rainbow 6 Siege), between Windows and a PS4, for example.
I am so here for your analysis of this scheduling issue, i would really like to know wtf is up with windows scheduler and this seems an excellent case in point. All my machines run ProcessLasso, because i have to tbh for audio
Take your time on it, ima watch the whole thing XD
A couple of my audio plugins can use AVX-512. For example, 2CAudio Breeze which is a nice reverb. I hope Zen 5 encourages audio developers to start using AVX-512 more.
The problem with AVX-512 on the CPU is how likely is it for a GPU to not run AVX-512 infinitely better than the CPU?
@@budthecyborg4575 GPU isn't great for audio because of latency
@@HolarMusic Good to know.
The only application I have for AVX-512 right now is the Topaz AI upscaler and GPU performance is an order of magnitude better than CPU.
@@budthecyborg4575As far as I'm aware AVX 512 workloads are still very much in the high complexity territory that CPUs beat GPUs at. GPUs tend to be best for stuff that can be broken down into a lot of very, very small tasks (or AI, but that's in no small part because of dedicated AI accelerator hardware)
@@bosstowndynamics5488AI inference is for the most part just multiplying your input with all the weights and biases in the model and AI training is just fancy matrix calculus, which both runs really nicely in parallel
Phoronix just released his review and the numbers on Linux are just incredible. Too bad the only thing we are going to hear is how bad those are for gaming...
He came with a Geomean of around 17.5% over the 7950x.
The loudest crowd most of the time is the smallest (Gamers)
@@ThaexakaMavroand most are gamers that wouldn’t have even bought one of these in the first place, but le circlejerk
That's the thing, the average results are heavily carried by avx 512. Nothing to do with linux/windows.
I can flip it and say that 20 to 40% perf increase from double execution units width is just pathetic... Should be at least 70 to 90%
@@panjak323 Which makes me wonder, was zen 4 just a homerun or zen 5 is unmatured/poor architecture?
@@panjak323 you clearly didn't read the review and are trying to spread FUD, and you don't understand geomean. The graphs include AVX workloads removed and it's still way up. God the comment section is so brain dead.
Best review I’ve seen on this, answering real questions about utilization and optimization. Not creating a story out of a handful of benchmarks
Blame HUB and Gamers Nexus and all of their cronie tag along creators that dote on their every word, for that kind of rubbish. It's not all about benchmarks, it's about real world utility and performance which they fail to understand for a 12-16+ core CPU. Reviews from people with real expertise are much better.
I'm tired of people thinking benchmarks matter when someone pairs a 4090 with a $200 cpu
One of these days people maybe get they are just amateurs wannabe standard just because one has a lab and another overbenchmarks like there is no tommorow.@@moonstomper68
@@tuckerhiggins4336 The reason you'd benchmark a $200 CPU using a rig with a 4090 is because you don't want it (and all the other CPU's as part of the testing to compare it to) to be constrained by the GPU.
If you tried benchmarking a $200 CPU in a game and used a $200 GPU as well, all the CPU results on the higher end of the chart are going to cap out at the same framerate because the GPU is the bottleneck.
The benchmarks are meant to compare one CPU to another using the same environment variables in order to show relative performance and value, not to show "how fast does this game run on my PC if I buy that CPU?"
Thank you Wendell for an amazing and truly honest review.
This is Linux vs Windows on Threadripper 2990WX all over again. And every outlet other than Wendell or Phoronix fail to consider a software issue.
Agreed. As much as I love GN and HUB I'm getting tired of them ignoring Linux. (GN shows Chrome compilation is a good step in the right direction.) When SOME people such as Wendell or Phoronix ARE seeing a performance uplift in Linux then *we should be asking:*
_Is the Windows task scheduler problematic for Zen 5?_
My 3950x still does me pretty good. But I don't do much with it.
Exactly, I mentionned it on hub and their response was to re test on Windows not changing shit or bothering to investigate the issue at all. Like guys, isn't this your job?
@@putneg97 100%.
It is almost ironic that Linux (open source) is keeping Windows (closed source) benchmarking honest. =P
Considering Linux performance is a great idea, but AMD should not release a desktop product unoptimized for Windows without warning.
Michael Larabel at Phoronix loves them. Particular attn, the avg scores, like avg of all creator, avg of all database, avg of all games ... About the only thing the 9950 doesn't mop the floor with, is power efficiency. The lower 9ooo series do rather better there. There are a few outliers where the intel 14th gen does stunningly well, but those are outliers.
Compiler performance is stunning!
I'm guessing the 9950x can shine on power efficiency if power limited, maybe set tdp to 105W which would effectively be an eco mode on a 170W default tdp. I want to see those benchmarks.
The main issue is memory bandwidth. If your workload uses avx512 in particular an all core workload probably easily saturates 16 cores, it likely saturates 8 cores, it's funny that 6 core zen5 may be the most sensible way to run these types of workload.
Linux Reviews from this channel and Phoronix Review clearly show a major difference between the 9950X and 7750X, where as tech sites and channels using windows to review the CPU's show the gains to be marginal or even to have regressed in some cases.
This sounds like a windows issue more than anything. Windows really sucks.
Most Benchmarks here come from Windows too, as it seems, its all mixed uo unluckily
SO, the most widely used platform thats been out for years.. and a new piece of hardware comes out where the hardware company makes the drivers.. and its the OS fault?
@@1DigitalFlowno it's not the OS' fault. It's AMDs fault
@@InfernoTrees I know that.. that's why I am replying to a comment that says "windows sucks"
@@1DigitalFlow oh I know I'm trying to back u up xD. Probably could've come off better but, windows does suck ass, but jfc AMD did NOT cook LOL
I’m not a gamer, just a photographer and fine art printer. So, even with all the bad news and confusion re the Ryzen 9000 series, I decided to go ahead and replace my 7900 with a 9900X anyway. I have an MSI MPG B650i EDGE WiFi motherboard in an open case with an ID-Cooling SE-207-XT air cooler to which I have attached a second fan. The 7900 runs with DDR5-6000 memory, Game-Boost and the TDP elevated to 105W. It pulls 142W running Cinebench R23 with a high temperature of 79C for a multi-core score of 27550. When I replaced the 7900 with the 9900X, with the BIOS cleared except for EXPO, Cenibench R23 scored it at 32214 with a high temp of 81C. That’s a 17% improvement, much better than I expected. Think I’ll be keeping the 9900X and seeing how much I can wring out of it with a better cooler. I have no intention of playing the core parking game. BTW, B&H is selling the 9900X for $50 less than everybody is reporting. They also pay the tax if you use their PayBoo card. No connection to B&H, just a happy customer for many years. Some good news re the 9000 series is overdue. I appreciate that gaming drives the technology, but other users make up a large part of the marketplace, so it seems wrong for gaming to influence the whole picture. Even though AMD fumbled the rollout, the 9900X is still a great CPU for non-gamers.
@JEHendrix what cooler are you using for the 9900x ?
@ - Thernmalright Phantom Spirit 120, dual tower 120mm. Good up to 180W. I’m overclocking to that power level and getting 34000 multicore on Cinebench R23, 2222 single-core.
I would've liked to see 7800X3D benchmarks in the gaming section. It is the gaming king after all.
Spoiler it's still the gaming king, you wanna game on ryzen still buy a 7800x3d.
They're probably waiting for the 9x3d to come out to make those comparisons
This CPU doesn't replace the 7950x3D.
just imagine it above whatever is on top in any of the gaming charts.
To be fair, this isn't really a video about gaming CPUs, that section is more about gaming on server-capable and workstation CPUs.
great review, thanks. very level headed and no hyperbole and unnecessary drama. Too many youtubers are concentrating purely on gaming and not as a whole package. I agreed with your conclusion, spot on.
I believe some reviewers cater to a younger audience, and I find the lack of professionalism off-putting. Being on RUclips doesn't necessitate clickbait for reviews. This is why I've mostly returned to reading website reviews, such as those on guru3d.
@@ThaexakaMavro Chips and cheese should give a more in depth architectural analysis of zen 5 and shows regressions in several instructions as well as memory. Memory regressions alone would explain the lower than expected gaming performance.
Here's something we're seeing in a test of the 9950X compared to 7950X, which we did as a ProxMox test. Inside of ProxMox hosts, we're seeing a significant improvement vs. at the wall power utilization. But in Windows itself, when non-virtualized (not using a QEMU CPU) the benefits are not being felt in any real mode; we don't really test for gaming. Now, obviously, we're comparing a point that is below straight to hardware to hardware virtualized. But the benefit does show up, and that is interesting.
Microsoft is more concerned with making money off their customers than optimizing performance on so many fronts. Gaining market share and taking the hit to their brand in windows is not on their radar when Ai and azure is making the shareholders purr.
Microsoft doesn't write chipset drivers, Microsoft doesn't write the microcode for AMD CPU.. so why are you talking about Microsoft?
@@1DigitalFlowBecause Linux does a better job, Zen 5 performance is better on Linux
Because as those chips are dissapointong this youtuber had to blame it on windows and this is now narrative for all the fanboys.
@grimfist79 He literally called out that, as a gamer, it's not wise to upgrade from Zen 4 to Zen 5. But there are more workloads than just gaming. You need to stop pretending that everyone who disagrees with you is a fanboy. He said he found some performance discrepancies that he needs to investigate further. If anything, he's more anti windows than pro AMD
He thinks the user is the customer 😂. It's the product!
Thank you Wendel . Really looking forward to Productivity review of these CPU's
Java don't have print thing some random glich appears if i solve those issues that the same
Zen5 may well have potential, but I chose to follow Wendell's advice to buy a CPU based on how it performs _now_ rather than on some nebulous possible future capability. This morning, after checking the last round of Zen5 reviews, I bought a 7800X3D. It will anchor my main gaming PC for the next 2-3 years. Maybe Zen6 will make good on Zen5's unfulfilled promises.
Yeah. I bought a threadripper 7970X after zen 5 was delayed because at that point zen 4 was rock solid, I'd been on a 7950X for 18 months and I just needed more for my work.
Had no idea you had a Linux specific channel. Glad I RUclips just happened to show it under this video. I was confused where the Linux video was.
4 months of no Windows on my gaming pc. Good to see the performance is doing well and will be interesting to see how your testing goes.
First! And the first ryzen9 reviews I've seen! That admin/windows trick thing makes me think in a couple months from now these processors will be a little bit better once either people or amd/windows figures their stuff out. Also still makes me excited for the x3d variants
Microsoft is always late to do anything for amd i know conspiracy lol
Here's an idea for benchmark: Game + OBS streaming (or recording).
Because many workloads today aren't just the game running by itself so I wonder how would it all look like in scenario like the one above.
For couple of months now I play on Windows with SRV-IO and IOMMU turned OFF, along with Memory Integrity and Core Isolation also turned off. The difference in 3DMark results and gaming experience are confirmed for the better. I would like to note, this is my gaming only machine, I do have another one for AI LLM testing with 7900 (no X) where all these options (in BIOS at least since there I run Ubuntu) are turned on. But it surprised me that turning those OFF for gaming, made by 3DMark results better, and my gaming with Freesync enabled monitor with vertical sync limit to 120HZ on 5120x1440 rarely drop in CoD MW3 multiplayer (even for a second or two). My GPU is 7900XTX from Sapphire.
This is interesting, I haven't heard about SRIOV and IOMMU affecting performance. You're not running Windows in a hypervisor? Do you have a ballpark estimate of how much impact it had?
@@DanielKennedyM1I suspect that the entirety of the difference came from the software side - SR-IOV isn't even available on consumer hardware and the UEFI option is really just whether or not to let network cards report to the OS as multiple separate devices, and enabling the IOMMU doesn't change any of the performance characteristics of the chip. On the software side though, core isolation and related features mean that W11 kind of sort of does run on a hypervisor, by default, because it uses Hyper-V to sandbox some of the drivers and other system components to prevent privilege escalation. It's honestly one of the few ideas in W11 I actually like (assuming it's actually securely implemented), brings some of the security by virtualisation stuff from niche systems like Qubes to mainstream users, but naturally that does come with a bit of a performance overhead and that difference is enough that gamers who don't understand the security implications wind up disabling it (or of course power users who don't run sensitive workloads on their gaming system like the above commenter who knowingly take on the risk)
@@DanielKennedyM1 Yes, no hypervisor, disbled srv-io and iommu along with windows security options mentioned…
I'm wondering if windows is simply using Intel code hence why everything seems to run faster on Intel?
I mean, Intel and MS have always worked together (wintel name exist for a reason) and the OS is favoring Intel .
Time to recreate all these benches using Linux!
I'm not really a huge fan of tin foil heading but I do think this is somewhat true. Intel and Nvidia have worked many times together with Microsoft. Especially Nvidia has the reputation to be in frequent contact. Meanwhile AMD has quiet the reputation of treating their software partners very poorly, so this might actually be a thing.
As usual Wendell is right 👍... I just got back from the future & the memory anomalies are resolved...
Looks like Zen 5 is alive like Johnny 5! Haven't thought about Johnny 5 in decades. Nice to see it on the desk today.
Thank you Wendell for all the work. Great review and personally I love to see someone who's leaning more on the relative than the absolute side of conclusions.
Good review, but the title pic wasn't covered - maybe a future video could compare these to Threadrippers? Thanks!
I can't believe after all this years Windows scheduler still does not fully support AMD's dual CCD layout so that AMD has to resort to crazy hacks with game bar and drivers to basically shut down half of the CPU you paid for to avoid performance degradation caused by suboptimal scheduling.
All these years? All these 1 years? Zen 5 is the first time AMD has had any need for special scheduling for symmetric dual CCD chips, prior to this it was only the 7900x3d and 7950x3d that needed advanced scheduling and most x3d buyers were going for the 7800x3d anyway. Yes, Microsoft absolutely should have fixed it (particularly since Linux already accounts for CCDs apparently) but it's not like it's been many years of wide deployment of the core parking stuff
@@bosstowndynamics5488 "symmetric" does not equal monolithic die where all cores have uniform access to caches and memory.
As each CCD carries its own caches having threads reassigned between them randomly, or threads of a single process running on multiple dies incur significant penalties due to cache misses and synchronization.
Dual CCD design first debuted in Ryzen 3000 series, or even earlier if you count 1st gen Threadrippers.
So yes, all these years and Windows still suffers from these problems. I'm not sure how Intel does it with their non-uniform core designs, but it seems they were able to convince MS to care at least somewhat.
While AMD's best effort is to hook up into game bar logic and disable half of your CPU.
@@MikeKrasnenkov Saying Windows "still" suffers from these problems is misleading - the performance penalty from communication between CCDs in everything up to Zen 4 for symmetric configurations was so small that pretty much no one noticed, so it really shouldn't be a surprise that Microsoft didn't bother to fix it. Intel forced their hands because all of their parts had 2 radically different types of core in them, whereas even AMD's heterogenous designs with Vcache (which have only been around for less than 18 months and are far less common since most gamers are going for single CCDs) will work just fine with scheduling misses, they'll just be slower. And the issues with the scheduler for symmetric dual CCD designs have only just now come to light pretty much today as far as the public is concerned.
@@bosstowndynamics5488 Why was the penalty so low before Zen 5 and suddenly so high now? Zen 5 should be pretty much the same what you call "symmetric" design as previous Zen architectures. What changed?
@@bosstowndynamics5488 All these years. Even first Zen had two CCXes. Now CCX is 8 core CCD but not always. Ryzen 7 370HX has 2 CCXes, one is 4 core Zen 5, second is 8 core Zen 5C. First Ryzens had 2 four core CCXes in one CCD. Ryzen 2000 was the same. Ryzen Threadrippers of 1st and 2nd gen not only had 2 CCXes per CCD but also used multiple CCDs. And ever since Ryzen 3000 launched, the basic configuration of desktop CPUs wasn't changed. It's still 1 IOD and 1 or 2 CCDs. TIt's been almost 5 years since 3950X was launched and like 7 or 8 since first Ryzens debuted. Intel launched Alder Lake three and a half years ago. And it's not the first time both AMD and Intel suffered because of M$.
BOTH companies had to write their own drivers to improve scheduling on their CPUs because despite their work and continuous push, M$ refuses to make WIndows scheduler better. Even after publicly committing to making a fix. AMD integrated their driver within chipset driver and uses GameBar to assign game to the correct cores. Intel wrote APO doing just the same thing but, afaik just without the help of GameBar. If a Linux shows improvements and Windows doesn't while Linux doesn't even need the software hackery, it's safe to say the hardware is not at fault, nor is hardware vendor.
Great video and a great technical review, not just running benchmarks, the majority of review channels are more superficial and less technical. I remember when you said the performance of 1st gen threadripper may not be a problem of AMD but a Windows problem because in Linux it was ripping everything.
Good stuff! I'm still on an Intel i7-6700 PC build, so anything would be a nice upgrade at this point. I do video/photo editing, streaming, and adjacent creative stuff more than gaming. So far the Ryzen 7900x price wise looks a lot more appealing and I hope it continues to get cheaper.
My favorite review. For my uses (with "gaming" about #9 on my list), this review was what I was looking for. Keep up the good work!
I still side with with Wendell on this one. There's something errantly wrong with windows . Even the x-elite is a letdown mostly because of Microsoft . Only time will tell lol i did the right thing and bought the R9 12cores for 280$ brand new one month ago.
I've got a hunch that the improvements that have been realised on the ARM side in terms of efficiency are probably at least partially due to Microsoft having to strip out a lot of 30 year old junk and could potentially be realised on x86 too if they got their act together and made Windows work properly instead of focusing on creative ways to juice their metrics and shove ads in users' faces
Still rocking a 12-core 1920X Threadripper in my homeserver. I love that platform. Seems like yesterday, like you said.
If AMD ever launch a 24 or even a 32-core desktop processor, I'll make the jump. But for now the Threadripper lives on.
Except that thing gets destroyed by both Intel and AMDs current mainstream desktop flagship chips
Given the title was hoping for some benchmark comparisons to the threadripper chips, 3970X etc
same
longest week of my life, waiting for this review
Thanks for going over the 4x dimm stuff. Very very helpful.
Damn havent seen this Channel for a while. Makes me happy to see that Wendell has alsmost half a million followers now.
Wow you’re early Wendell - faster than GN and HUB! 🎉
By a couple minutes. GN went up right after.
He's more credible than GN to me. Far more authoritative and 1,000 times less self-impressed than GN. L1 talks to you. GN tries to talk down to you.
@@calldeltosell well said. Steve seems to be on his Louis arc for some unfathomable reason.
@@calldeltosellSteve @ GN & Wendell are friends. They view CPU & GPU from different view points. GN is a gaming channel. Level1Tech is an all around performance channel.
Well, note how tired he looks... I bet the Windows shananigans kept him from sleeping...
Conclusion: Windows is a mess. Is gaming on Linux advantageous over Windows with these new processor in total?
Almost certainly, and arguably was with the previous gen CPUs as well, though it'll always come down to Linux compatibility for any particular application.
I wish Wendell would test Factorio. It's one of the few games that is heavily bottlenecked by memory performance, much more than CPU/GPU. Its (native!) Linux build has always run better than Windows, for a variety of reasons - the most compelling being Linux's support for Large Memory Pages.
@@DanielKennedyM1 interesting. Thank you.
They smoke on well optimized proton games from what I hear.
@@DanielKennedyM1 GLIBC_TUNABLES=glibc.malloc.hugetlb=2 got me 20% extra UPS in factorio lol.
After jumping ship from MS advertising and spying platform I was shocked to see that Linux Gaming is now not only viable but often on par with Windows running through proton.
I find it not at all surprising that Windows gets outclassed by Linux on these new processors, as Linux is an actual operating system with a strong focus on performance as opposed to tricking the user into tracking its users and sending them ads. Where you put your dev efforts matters.
(Also Windows is a house of fucking cards)
interesting comparison at the beginning, Zen 5 really starting to feel like Zen 1 from the past, it's something that will be better in future generations, Zen 6, 7, etc.
I find it interesting that the 9950x beats almost everything else in minimum (0.1%, 1%) frames in most benchmarks. That's interesting, and something I would like to understand more. I wonder how much smoother that feels.
Thank you for this insightful and nuanced review. Truly in a class of its own.
Thank you for the comprehensive look-see, Wendell. 🙏🏼 Your views on these new parts (together with those of _Hardware Busters'_ Aris) are a *refreshing* (see what I did there?) departure from all the _weeping & gnashing of teeth_ that I've borne witness to on RUclips lately. 👍🏼
P/S: The check is in the mail. 🫰
only good reviewer alive now a days! great work.
I do agree that Intel giving up on AVX-512 was a big shame, I rarely give AMD credit but I do commend them for keeping it around. What many dont realize is that AVX-512 isnt just the extended registers and vector length, but it also comes with strong optimizations for previous instruction sets like AVX2 or older, which many apps use. Sadly AVX10 will just be a band aid.
If you want to max out the RAM on AMD platform, going with the ECC is probably the best way to do so. 48Gb ECC UDIMMs are available and the prices are decent at $200-250/ea. I mean, with that much RAM, random bit flips from the background radiation are inevitable.
Depends on what you're doing, the extra bit flips from large DDR5 sticks are supposed to be handled by the ubiquitous on die ECC anyway. Of course, not many workloads actually benefit from that much RAM so you're already selecting for things like home servers so full ECC is probably still a good call, but it's not mandatory for all high memory systems by any stretch
I'm still slumming it on an X99 platform, running everything, all those games plus more just fine.
What helps is a 4K TV as you can use a desktop window mode for gaming, the size of the desktop window is similar to a large monitor (eg. 1920x1080p window is 27 inches).
Great video, but it doesn't consider price.
In some of these benchmarks, the i7 is actually outperforming the 9950x.
So why wouldn't anyone want a i7-14700K at half the price?
Thanks for another look at the zen 5 :-). The nice follow up would be to dive deeper into new zen5 microarch Vs win 11 basically to show/explain the current seemingly strange benchmarking results ... and to see what could be done to win 11 to use zen5 architecture fully.
If you want to run very high memory clock/timings, keep in mind that your memory will degraded overtime if you running at high voltage (even it stamped to your EXPO and XMP profiles that they designed to run on that voltage).
You can run DDR5-6000-8000 at 1.35 or1.4v but it may not able to run at that speed after 2 years because it degrading very fast.
Some Mainboards also inject higher voltage than you set in the BIOS. I found some ASUS boards giving DIMM 1.38v instead of 1.35v setpoint.
For the memory, I had problems with 2x2 sticks on my AM4 with my 5800X3D, the system would not boot AT ALL, if a specific stick was not in the right slot. Once booted, I could apply the 3200Mhz without problems.
They layout is like you show, each pair on the same channel ( and there's a specific order, i lost hours trying to understand, and just bruteforce all the possibilities until it worked), unlike what's specified in the manual.
this channel brings meaningful review for me.. 1. linux based testing for PRODUCTIVITY use case. 2. non-gaming and balanced summary to tell us if it's something worth considering. I'm planning to upgrade from 3700x to the 9700x
The only channel that covers 4k resolutions
The reason that 4K is not commonly tested in CPU reviews is because 4K is primarily GPU-bound and GPU limited. You want to create a CPU bottleneck at 1080p with the GPU maxed out with no upscaling which allows different CPUs to be easily compared
Just bought 9950X and an MSI X870E MPG Carbon WiFi motherboard and the USB disconnecting issue still exists/has come back, meaning the platform is unusable.....yes, the one that surfaced over 3 years ago when X570 was launched! Such a pain, an no influencers/tech sites are covering or mentioning this so would be good if influencers got together and held AMD and motherboard manufacturers accountable for this!
Thanks alot for the unbiased review. You brought up alot important subjects, us mostly none gamers realy likes hearing about. Like what to expect on DDR5 on 4 channel setup. I am a developer, so i use my system mostly as a server. Currently i have a 5950x, which is a fine little beast for this. But i see it being constrained on memory (DDR4) bandwith, when i start to load the 16 cores up, in my applications. I tested with a 7950x3D, and the way my programs allocate mem/cpu cores, doesnt benefiths this alot vs the 5950x. Therefor i hoped the 9950x woulld be a worthy upgrade. I had hoped we had seen a 8xZen5+16xZen5D cores edit, but seems that was a dream not coming true, before Zen6. Likewise i had hoped for a IO die with 1GB+ L4 cache to be shared with gpu/cpu, bit like broadwell, intel cpu... but no. So even Zen5 is both faster and cheaper than Zen4, i guess its going to be another wait untill Zen6 gets out... Again, thx for review.
Just installed the Ryzen 9950X into my board its insane 😊
"Dr Su, a third video has hit the Hardware Unboxed channel"
Hardware unboxed is run by a bunch of histrionic clickbait prostitutes ...
my twin 1950x/2080FTW3 still going 24/7 years later, still dont regret building rigs with it.
i want to stream valorant, black ops 6, and gta 6 in the future, and price is no option. I ordered the 9950x today, did I make a mistake?
yes u should get a:1 ccd chip aka 9700x or b:9800x3d bcos u dont really need tons of cores they just need to be fast and 9950x still has latencys between ccds and one is actually running I think 300,400 mhz higher than other..in short its waste and useless for gaming I mean its ok but ther would be no difference using 9700x in fact with 350€ left in wallet u could get better gpu witch would get u more...so yeah 9700x but if I think more nah..on long run pay 150 more and get 9800x3d
Is there a better indication that there is a software problem in Windows than 10 extra frames by running a game as administrator.
I love Wendel's videos. They are excellent. He is awesome.
If Microsoft decide to optimize their OS and vendors decide to do it as well, as they should, these CPUs will age like fine wine.
Huh? You want Microsoft to add more useless features that bloats the system and nobody uses? Don't worry, they're on it.
But never buy something based on potential future updates. More often than not the wine just becomes vinegar.
@@aelderdonian that statement is generally true if the update is promised by the same company that sold you the hardware, where the company is not financially incentivized to provide you software updates down the road, but we are talking about Microsoft that needs to fix their OS and specifically their scheduler, AMD has already done their job and their hardware works fine on Linux, it's 100% Microsoft's responsibility to fix their shit here
but microsoft would have several antiviruses runing on each core - the searchengine in windows 10 are as slow as running cyberpunk on a singlecore cpu and has been the last 8 years - I guess it has a conference with microsof if im legit to search on the harddrive
If ..if..if...we buy products based on their current performance or features not something fanboys hope will happen in the future
This CPU generation is getting some of the most different opinions across the lineup of critics I've seen in a while.
Steve: "Meh."
Linus: "Cool!"
Level1Techs: "Get it for an upgrade of three years or more."
Hopefully the X3D ones show a better, more obvious improvement. My opinion is that you just don't get the performance for the value at all. I guess I'm mostly with Steve (GN).
I appreciate this review. I'm someone who just started buying my parts in preparation for this CPU. I'm coming from the Intel i7-7800X and desperately need a new CPU, but when the rumors started circling around about the 9000's I decided to delay building a new PC. Seeing all the doom and gloom around it was a little concerning, but I mostly use Blender/productivity software so seeing the boosts there had me pretty happy, gaming is secondary, so as long as the game runs better than what I currently am dealing with then it's good for me. Though I gotta say I've been thinking of checking out Linux for a while now since I don't agree with the crap they are pushing into Windows. Maybe I'll do a dual boot so I can check it out.
Damn it was actually 9:30 PM when I started this video, you creeped me out 😮💨
I'm definitely not upgrading right now but I'm excited for when I eventually do.
If you ask me, the most valuable info in this video was the little "RAM break" talking about how and which ram runs, since Ryzen CPUs basically "stand or fall" with the RAM choice.
I'm running a 2950X as a streaming PC, so I have some PCIe cards in it for capture, and there might be other things soon. The 5600X in my gaming PC isn't THAT far away from it in processing power, and it's fascinating. For processing power, I could definitely go with many AM4 and AM5 CPUs for the streaming PC, but I'd lack the PCIe lanes, so any upgrade will be very expensive.
thanks for the info Wendell
i love the johnny5 reference, was one of my favorite movies as a kid
In regards to administrator providing better performance, this has been known for many years amongst the game cracking scene. Many cracked game installers will run the game as admin by default. There's also other reasons for this, like trying to reduce problems people may encounter. But this can obviously be abused by dodgy files. Running as admin certainly wont provide performance benefits with every single game, just occasionally. I've no idea why this happens, but it's been a thing for a very long time.
Amazing work!
finally a comprehensive review. Gen-on-gen improvement is unimpressive, but fingers crossed it's a stepping stone to something better (10050X perhaps?)
Another fine review. 👍🏻
But I’ll properbly stay on my 3900x.
It would be really good to have a dedicated video on RAM for AM5 + Ryzen 9000 / Zen 5 (+X870E).
Its sad that ryzen 9000 is just a percentage or 2 faster than ryzen 7000. Not worth that upgrade
I think you are one of the best youtube tech gods, you and Steve (Tech Jesus) from Gamers nexus the goverment of the us should pay you two guys.
Imagine windows being the problem and its neutering performance. I'm glad someone is saying something. No one else seems to have brought this up until you did.
You've got some crazy bags under your eyes, I hope you will be able to rest now :)
Thank for your hard work
I got Rocket Lake because of AVX-512 for RPCS3. Love my 11400, but even it can light on fire with a Z board and full limits removed, especially with AVX-512. 150W limit for that chip, 175W burst for my twelve year old Hyper 212 Evo (with new fan) before I get uneasy with temps.
Hey Wendell,
after reading about the "Administrator" account performance gains, my thoughts went directly to the large page support. It's locked behind a security group policy AND requires to run the exe elevated.
The GPO is under Computer configuration -> Windows settings -> security settings -> Local policies -> user rights assignment, called "Lock pages in memory".
Linux iirc has large/huge pages support by default
This was/is commonly used for mining applications where the gains are pretty similar, now I know there are some stability implications with memory allocation, I can't confirm if that's the main difference from the disabled Administrator account
Since I don't own a 9xxx series, maybe you could test if this is the setting that gives that extra performance?
It's a great take with solid hypothesis on why Zen 5 isn't performing as it should. Anandtech did a great job at analyzing inter-core latency, maybe it's not the fact that they used the same cIOd they used on Zen 4, maybe it's windows 11 scheduler that isn't supporting this new uArch properly yet?
I blame windows when stuff is funky, happy to know thats still good to do
5:32 "PBO doesn't add anything" this is not true. You have simply toggled it on without tuning it. using Curve Optimizer will result in the same watts as stock and yet better performance and lower temperatures than just PBO. I wish people would use the tools given by AMD to benefit themselves. It's just odd to shoot yourself in the foot especially as an enthusiast. Leo from Kitguru has demonstrated this.
Interesting point that you make about Linux vs. Windows game performance. I have been seeing hard evidence that GPU performance (NVIDIA) is better on Linux machines over windows for AI oriented tasks, like embeddings, for example. My theory is that that CUDA driver performance is heavily optimized for linux due to heavy usage in Transformer model training by the likes of OpenAI, etc.
One thing I haven't seen: If you use something like Proxmox, NUMA correctly configured to use a correct CCD and no SMT passed on to the Windows VM, how does the game work? What about Linux? I find that sometimes you can tweak KVM to do stuff that can get pretty close to optimal performance in baremetal. Thanks for the great work!
R5 3600 still running strong here. At 1440×3440, it probably still has headroom (paired with a 3070 tie)
I would love to see a dedicated video about selecting memory on consumer and enterprise boards and comparing timings
So far, the only video that is not a biased anti AMD hit piece.
Thank you Wendell!!
Watching this, hearing you say Windows is doing something weird, and having just seen Hardware Unboxed's video about what I expect is the same issue is interesting.
I’d like to see more about how Windows could be the problem w/ consistency, since Jayztwocents was stating there was wacky behavior between Intel & AMD non-3D processors, AMD 3D models were performing similar to Intel’s offerings. Jay was wondering if it’s how games are handling the multi chip architecture of the 3D models vs their non-3D counterparts, think single socket vs dual socket systems as an example what thought line he’s going with.
My thought after your video is, what if Jay’s theory should be in relation to the CPU & Windows vs the CPU & games.
As an unrelated side note, I’d double check your test result slide titles since there are few that the title said 1080P when you were showing the 1440P & 4K results.
Appreciate the review!
Yeah time to upgrade...next year , from tr 3970x to tr 7970x. ❤
It would be interesting to see if running games off a Dev Drive in Windows would somehow improve performance skmilar to running them as Admin.
While gaming is undeniably fun, these powerhouse 16-core processors are built to unleash your full potential beyond just gaming. They're designed to handle the most demanding tasks like professional video editing, 3D rendering, and complex simulations, empowering you to create and innovate without limits.
Great video
Have you observed a similar performance difference when running games as Administrator on a Intel system?
I have a 5900x and i just don't see it yet. I was hoping for a bigger step up this generation. Most really intensive tasks i do on GPU.
Hey, you got me there; I thought you meant the threadR 39xy series 😉😉
would be interesting to see single-thread benchmarks with the other cores/threads pinned by some other benchmark/stress-test