There will be faster memory in the 15000 MT/s range, and PCIe 5 SSD speeds are going to mature through 2025, well above PCIe 4 speeds. Once that comes about I will look into ARL and your course. Is there a Discord for your course? Intel also has a tuner that optimizes certain games?
15000 MT/s? On DDR5? I don't think we'll ever get that, just like we didn't get DDR4 at 7500 MT/s. Over 8000 MT/s out of the box is already awesome and arguably better than what we had on DDR4. Other than people going for records, I don't think we're going to see 10000 MT/s or more. 8800-9600 will be the sweet spot.
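For context on why those MT/s figures matter: theoretical peak bandwidth scales linearly with the transfer rate. A minimal sketch of the arithmetic (the `peak_bandwidth_gbs` helper is my own name; this is the theoretical peak for 64 bits per channel, ignoring DDR5's split into two 32-bit subchannels and real-world efficiency):

```python
def peak_bandwidth_gbs(mts, channels=2, bus_bits=64):
    """Theoretical peak DRAM bandwidth in GB/s.

    mts: transfer rate in MT/s (mega-transfers per second).
    Each transfer moves bus_bits / 8 bytes per channel.
    """
    return mts * 1e6 * (bus_bits // 8) * channels / 1e9

# Dual-channel DDR5 at the speeds discussed above:
for speed in (6400, 8000, 9600, 15000):
    print(f"{speed} MT/s -> {peak_bandwidth_gbs(speed):.1f} GB/s")
```

So 8000 MT/s dual-channel works out to 128 GB/s on paper, and the hypothetical 15000 MT/s kit would be 240 GB/s, which is why people keep wishing for it.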
I wouldn't worry about it. This is the first time they've used DLVR, on Arrow Lake, and it's designed as a fail-safe mechanism to prevent the chip from over-volting from ring bus clocks. That's miles better than what AMD uses, where SoC voltages are still catching fire when user-error OCs push them over 1.35V. Board vendors have limited memory frequencies above 6400 in EXPO because the SoC can't handle it; that's why they add an extra clock cycle at higher frequencies, to keep the memory from asking the SoC for more bandwidth.
I think the reality is quite simple. Intel, contrary to popular belief given current events, is quite smart. They don't care about gaming and little whiny babies; those guys don't own shares. Gaming has and always will be about the GPU. The CPU is about compute, and in the AI age, real parallel processing power = money. Yeah, OK, there is a regression in gaming... but bro, if you can see above 200fps, get off your ass, hit the gym, go to university and become a fighter pilot or F1 driver instead of wasting your gifts on COD. Personally, from a financial standpoint and long-term outlook, I think AMD, with their X3D and superior gaming performance, is ignoring the way the market is going. Jufes, I think your scale is off on the competency: 9-10 is like Buildzoid, Kingpin, der8auer, guys that will solder shit to the mobo and CPU to win world records, not little Jimmy in his mom's basement who knows how to navigate an ASUS BIOS.
Gaming perf is a marketing tool. Intel will still make money chasing AI, but even there competition is extremely stiff, with Nvidia far on top and AMD right behind. Marketing for long-term mindshare is very expensive and important for the whole of the business, and that's where gaming products and perf come in.
there are no games even worth going any higher than what I got already. ITS ALL JUST AI AND DATA CENTER SHIT FROM HERE GAMER BROS I mean what GTA 6 whenever ???
can't wait to see a max OC 9800X3D
Based on der8auer's and others' super quick and dirty OC results, the gaming uplift is quite substantial over stock, even with just a 200MHz OC, and it's suspected that a 300-400MHz OC might be possible with some serious time and effort.
@@WaspMedia3D Some motherboards allow manual overclocking in the advanced CPU tab.
@@silentx9709 9000 series X3D is fully unlocked. Any x50 or x70 series mobo (which is all of them that any enthusiast would buy) can overclock it easily, and the 9800X3D has pretty good headroom.
@@silentx9709 All 9000x3D are fully unlocked for OCing.
@@WaspMedia3D Keep in mind that Zen 5 fixed the Windows lag that made the 7800X3D terrible to use for productivity outside of games. It still gets stomped on by a tuned 14900K, but if you're a noob who cannot tune your PC, then buying a 9800X3D, getting a 6400MT/s AMD EXPO memory kit, upping your all-core frequency as high as you can get it, and setting memory to 6000-6400MT/s in gear 1 is all you really need for most games. However, in games like RUST the 9800X3D still dips to like 49fps on the 1% lows on a high-pop server, so it's completely unusable dogshit for me. But if you are a normie casual it's perfectly fine, and if you're a turn-based 4X or sim game player it's probably the fastest CPU for that kind of game.
Basically, Bartlett Lake, IF it's still going to release with 12 P-cores, is going to be the 10900K of the next few years, until both AMD and Intel get their latency in check.
I hope Bartlett Lake is released, but I don't think it will be.
Really hoping for a 12-core ring design on Intel 7nm or 10nm
that chip is never happening man, Intel's future for gaming is dead
Same here, I cannot stop using my 10900K, as it has the lowest latency in apps and is best in gaming for me at 4K 240Hz.
My 10900K is mint and I've not even tuned it apart from turning off hyperthreading; it's not even running fast memory. I'm just thinking about upgrading for next season's HWBOT overclocking league to get some decent benchmarks. I'm currently 7th in the UK this season with just a home-brew chilled setup. Probably going 12th gen, as I have a board that should overclock locked CPUs, and I don't want to update the BIOS for newer ones in case it loses that capability.
I wish you benchmarked more esports titles like CS2, Valorant and RS6. These games are primarily CPU- and memory-limited, especially if you are trying to raise your 0.1% lows above a 360Hz monitor's refresh rate, and they have huge playerbases.
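For anyone wondering what the "0.1% lows" in comments like this actually measure: they're computed from a frame-time log (PresentMon, CapFrameX, and similar tools). Definitions vary by tool; one common convention, averaging the slowest slice of frames, can be sketched like this (the function name is mine):

```python
def percentile_low_fps(frame_times_ms, percent):
    """Average FPS over the slowest `percent`% of frames.

    frame_times_ms: per-frame render times in milliseconds.
    Some tools instead report the single frame time at the
    99th/99.9th percentile; conventions differ.
    """
    slowest = sorted(frame_times_ms, reverse=True)  # longest frames first
    n = max(1, round(len(slowest) * percent / 100))
    avg_ms = sum(slowest[:n]) / n
    return 1000.0 / avg_ms

# Example: mostly 2.5 ms frames (400 FPS) with a few 10 ms stutters.
frames = [2.5] * 997 + [10.0] * 3
print(percentile_low_fps(frames, 0.1))  # → 100.0
```

The example shows why lows matter: the average FPS is near 400, but the 0.1% low is only 100, and that is what you feel as stutter on a 360Hz panel.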
That 9800X3D leak looking spicy 🌶️
Tho the price 😢
@@femiolotu6482 yeah but 7800x3d is like 400+-
@@femiolotu6482 Still too high.
@@femiolotu6482 $100 more gets you a 9950X with twice the cores/threads; the 9800X3D is an awful value. I'm on a 12700K/DDR4 and want to upgrade, but there's no way I'm buying another 8-core. Yeah, I mostly game, but I have loads of crap open in the background that needs CPU cycles too. The 12700K barely hangs in there depending on what game I throw at it. Really hope the 9950X3D releases sooner rather than later and is no more than $700 while having cache on both chiplets.
@@matgaw123 $30 isn't a big increase over last gen, considering the perf gains and the competition's prices at their perf. $110 cheaper than the 285K and way better perf. Don't get me wrong, I wish it was less too, but in the grand scheme it's hard to complain.
The latency added by moving to a tile based package is hurting this CPU in gaming
they also moved the IMC off the compute tile, so how can we tell which is causing the trash performance?
@@CoolStuff-jello It's not a monolithic ring bus like every Intel Core i-series chip from Sandy Bridge to Raptor Lake.
The P-core and E-core layout on the silicon is poorly done, and the memory controller is way outside the compute die; there are too many flaws.
Makes you wonder what AMD is doing differently.
@@PwntifexMaximus I'd say that Intel gets the worst of both worlds because they don't commit to either monolithic or chiplet
@@PwntifexMaximus Brute-force cache capacity.
FINALLY, BEEN WAITING FOR A FRAME CHASERS VIDEO ON THIS SINCE LAUNCH
My 265 out of the box scored ~35000 in Cinebench R23. My entire tuning consisted of setting the P-cores to 56x and the E-cores to 50x, and Cinebench jumped to 39500, virtually identical to my 14900K with the latest 0x12B microcode. I'm sure it can be tuned higher, but I buy my systems to use, not to tune.
Just assembled a 5900X for a friend of mine! A bit under 21000 in R23, I thought that was fast! About 80 dollars cheaper, but kind of a waste of money; still got a crap CPU cooler on it though. Is the 12900K still a good buy, or are the microcodes saving the 13th/14th gen K versions now?
nobody cares about cinebench scores here. Do you play cinebench all day? It can score 999999999 but if it sucks for gaming i ain buying that sheet
@sperrex465 Clearly you missed my point. It's not about playing Cinebench, it's about comparing before and after a change. Got it? And I see a similar improvement in the games I play. I do hit an fps wall, but there are big improvements with simple OCs.
@@johnbernhardtsen3008 The 5900X has to be cheaper than a 12700K or 13600K for its price to make sense.
@@saricubra2867 It's about 15 dollars cheaper than the 12700K here in Denmark!
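The before/after numbers a few comments up (35000 to 39500 in R23, with P-cores at 56x and E-cores at 50x) are easy to sanity-check: core clock is just the multiplier times the 100MHz BCLK, and the gain is a simple ratio. A quick sketch (helper names are mine):

```python
BCLK_MHZ = 100  # Intel's base clock; core frequency = multiplier x BCLK

def core_freq_ghz(multiplier, bclk_mhz=BCLK_MHZ):
    """Core frequency in GHz for a given multiplier."""
    return multiplier * bclk_mhz / 1000

def uplift_pct(before, after):
    """Percentage gain from a before/after score pair."""
    return (after / before - 1) * 100

print(core_freq_ghz(56))                    # P-cores at 56x -> 5.6 GHz
print(core_freq_ghz(50))                    # E-cores at 50x -> 5.0 GHz
print(round(uplift_pct(35000, 39500), 1))   # R23: 12.9 % gain
```

So that "entire tuning" of two multiplier changes is worth roughly 13% in R23, which is why the commenter calls it virtually identical to a 14900K.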
It's awesome that you went through all that. But it seems like a lot of effort for something that will get demolished by stock settings on a 9800X3D processor at only 6000MT/s memory.
I swear they released the core ultra series to push people back into buying 13/14th gen which is actually made in their own foundries 😂
Seems like a Rocket Lake 2, when people were buying 10th gen instead.
the damage is already done. The mainstream tech tubers already brainwashed people to not buy 13/14th gen so those consumers are going to experience the AMDip
@@maurob13 Worse than Rocket Lake, IPC regressed. Meteor Lake mobile, for example, is worse than Raptor Lake mobile.
Worked out for me. I never buy a chip that was just released. I always buy last year's chip to save money and because things like this happen all the time.
I even bought mine in the middle of the degradation drama.
People were returning their recently bought 13 and 14 gen CPUs out of fear. I wasn't worried about the degradation since I knew how to avoid it in the bios. But when I was buying the chip the microcode was in the process of being released anyway.
Got a brand new open box 13700k for $230. A brand new open box rog strix z790 for $211.
Same thoughts. 14900K is currently priced to compete with 9800X3D.
Noticed that the "stock" kit was 48 GB and the "Max OC" kit had 32GB. Which kit did you use for the other one?
Looks like M die and A die
Because he's presenting a best-case scenario: a less painful, almost plug-and-play experience, yet top-percentile stock results. 24Gbit dies are less stressful on the IMC.
Again, the best Intel chip tweaked to death barely competes with the latest 3D AMD CPU at stock - now the 9800X3D - and still less efficient!
Yeah _way_ less efficient, and far more expensive... from some quick and dirty OC testing with the 9800X3D, the gaming perf uplift when OC'd is impressive, with some results showing 1% lows higher than almost every other CPU's average in some games. AMD killed it with that CPU, which is good to see considering the standard Ryzen was kinda meh.
@@WaspMedia3D Can you specify what those 0.1% lows were, in which games, and your full system specs?
Yeap, Jufes is either in denial, or trying too hard to sell his course lol
@@WaspMedia3D This very CPU (the 265K) is actually at least $100 cheaper than the 9800X3D at current prices (where you can get one at all), not to mention the scalped ones; compared to those it's half the price.
@@HenrySomeone Oh sorry, I was mistakenly referring to 285k ... but neither of those really competes with the 9800x3D.
AMDip vs Diptel...who will prevail.
Hahaha DIPTEL
Bintel
@@Deezorz🤡
Awesome video, been waiting for it for a while!
25m long video comes out 2m ago and bro has already watched the whole thing and left a comment.
@@onsh that's what fine tune gives you.
I think the wall he's referring to is the DLVR. It can be disabled when the temperature reaches 10°C or lower. I'm also not sure if the POWER GATE mode is only available at low temperatures. I'm convinced they added the regulator to prevent hacks using high voltages, which I think is the reason for the microcode issue in the previous generations.
I know the motherboard partners are pissed. How can you sell anything with no processors 😂😂😂
Arrow Dip
Professor FPS. Much Thanks man
Cache and E-core overclocking have so far shown the biggest gains. Bypassing the internal voltage regulator and commanding the actual safe voltage directly has shown improvements as well.
I hope they fix all this stuff.
This architecture is Intel's Piledriver (Meteor Lake is Bulldozer). It's fundamentally flawed and can't be fixed with software updates.
Raptor Lake, meanwhile, is hardware-wise the best of the monolithic ring bus designs, and its bottleneck is software.
The i9-13980HX is the most overkill mobile CPU ever; I was blown away by how power-efficient it is in a laptop, and it's such a productivity beast.
@@saricubra2867 Just stop. Piledriver was a Bulldozer refresh. Meteor Lake is a laptop generation. And Bulldozer was 5 years behind Intel on IPC, it was a complete failure.
This is a massive leap forward in multicore and roughly the same in gaming - which will probably be the same or better in the next 6-12 months of patches - which is in no way the same as half the gaming speed and slower in multicore.
@@CyberneticArgumentCreator He's so wrong, it's not even funny. I hope he learns. What's sad is that I've seen other comments like that. Or maybe it was also him on other videos, I'm not sure.
Idk if you've seen on Twitter that turning off all but 1 P-core actually improved performance. It's either cache-limited or interconnect-limited.
Chiplet = Diplet?
Bintel real
Can you please do a test where you compare 8400 CUDIMM or whatever max speed you can get with tuned 8000 A-Die on Arrow Lake? Really curious if tuned A die gives it even better performance or if CUDIMM is the way to go on this platform.
Can you also get the 7800x3d and 9800x numbers?
I'm pretty sure all Core Ultra chips reach 6GHz easily (not certain), and anything above that is a bonus.
Brian from Tech Yes City OC'ed the e-cores and saw a significant performance increase in gaming and apps.
Don't say that goober's name please
@@CyberneticArgumentCreator Brian
Brian
Brian
@@CyberneticArgumentCreator I'm not familiar with him, what did he do?
Hes jufes's friend@@Winnetou17
@@Winnetou17 He blames Intel's ineptitude on DEI hires instead of poor management.
finally the real Arrow Lake review on YouTube dropped
So... get Arrow Lake if you like to tinker, but otherwise the advice is the same: X3D chips for people who can't / won't tune their system, 14900K for people who want to tune and get max FPS
The only way I would get an X3D chip would be if I only gamed.
@@DingleBerryschnapps troll
*Cough* Powerbill *Cough*
@@DingleBerryschnapps troll
@@PwntifexMaximus only poor people cry about it
still loving 13700k
Shit I still loving my 10700k lol
The only problem that I see with that chip is that a mobile i9-13980HX is faster in games (36MB of cache); in Cinebench it's the same performance, and it maxes out at 120-140 watts, over 100 watts less than the 13700K.
@@saricubra2867 oh well shit lemme just junk my whole pc and get a laptop then lol
@@saricubra2867 you might as well have said 9800x3d
@@michaelmcconnell7302 The i9-13980HX has 24 cores and 32 threads.
Can we expect something like Ultra 9 290K and Ultra 7 270K to achieve the missing performance in the new processors?
there is no missing perf, the 6- and 8-core variants are enough for high refresh rate gaming
I just watched a video with Robert Hallock on how they are reverse-engineering the problems with Arrow Lake's launch. There is no doubt that Intel screwed up here. There are so many things to worry about at launch, but they didn't prioritize benchmarking and making sure that both BIOS and Windows settings were accurate to get the most out of this chip. According to Robert, there are latencies that can be taken care of, thanks to how fast these E-cores are, but they are not being addressed because Intel didn't handle the software like they should have. I would be willing to bet that many are going to eat their words once we see these latencies taken care of. The chip has massive multi-threaded and single-threaded potential, especially considering that it is the only platform taking advantage of the new CUDIMMs. I would be willing to bet that we will see 7800X3D gaming performance and complete dominance in workstation tasks, which will make it the most desired chip for most. Wait six weeks.
Why doesn't Intel just freaking make motherboards and hire a guy like you to dial in the best settings, and launch and sell the products after... I mean, how the hell do they not do that? It's almost crazy to me.
I have a 10900K and a 7900XTX. I was wondering why enabling Resizable BAR hurts CPU performance.
In COD, for example, enabling ReBAR tanks the CPU 1% lows down to 115fps at 1440p very low settings. When disabling ReBAR, the 1% lows shoot up to 200fps, but it hurts the GPU a bit.
I'm just curious as to why CPU performance gets nerfed when having ReBAR on is all.
smth is very wrong with your PC, usually enabling ReBAR or SAM increases fps
@@iikatinggangsengii2471 I think every game is diffrent
@@winonesoon9771 not on amd
Jufes, as far as Windows snappiness goes, which one would you say fares better: your 9950x workstation or the tuned 265k?
9950X has 32 threads, it wins by default.
@@saricubra2867 No it does not. Amount of threads does not dictate system snappiness. That is misinformation.
@@SocratesWasRight Are you sure?, why then when i turned off the e-cores on my 12700K it felt so sluggish?.
@@saricubra2867 There is no way your system would be CPU thread starved on regular desktop use. You need to be running Cinebench or rendering tasks (or something similar) for that to happen. Anyhow, why did it happen to you? I have no idea.
@@SocratesWasRight It felt sluggish when I disabled those cores. I guess by your logic a quad-core is the same as a 32-core.
Hell yeah, good sauce man! 👍🏻
Awesome video, can't wait to see the next one.
I'm so glad I bought that course. I'm not getting an Ultra. I'm continuing what I started doing years ago of upgrading to the best of the previous generation. I'll use the course to tune my 14900k, and then I'll use it to tune my 285k or succeeding generations.
You should get a 9800X3D; the reviews are out, it's an endgame CPU.
@@Zoomborg maybe he isn't gaming
Did you check the latency from E-core to L3 and RAM vs the P-core latency?
Were you able to verify that using one P-core plus the E-cores is faster in gaming, because supposedly Windows doesn't get awkward deciding where to send the workload? Just what I read. This was a fascinating piece, which interests me more as a 5800X3D owner; I would like to upgrade my box at some point. Thanks
WARNING! WCCFTECH says that Intel is disabling DLVR bypass on the Core Ultra with microcode 0x122, reserving it only for extreme overclocking scenarios (liquid nitrogen cooling).
Chips and Dip video.
Paper launch. That said, I think it was to get the 265K/245K out, which are most needed. The 285K, I predict, will launch alongside Battlemage, with more options to overclock, once Win11 has proper support and BIOSes are up to date. As it is now, the 265K easily gets a ~35% performance boost and temps are still in check. That's huge. We might get different microcodes too, to open up the CPU's potential, as it is very underpowered right now. It might result in a 285K and a 285K Extreme (290K) for the next launch. Looking back on the history with 13th and 14th gen, I think Intel had to take this path to make sure motherboard manufacturers are following Intel guidelines, so we don't get the same failures again, which I put 100% on ASUS. That's why I also think most of the tests were on ASUS boards, to raise awareness. Very poor results compared to MSI for example, and even on the ASUS extreme profile it was just frying the CPU without any noticeable gains. I think there is more to it.
Edit: With PCIe 5.0 with direct access to the CPU and 8-pin power on selected motherboards, and APO paired with Battlemage, we are looking at 5080 performance for $500-1000 less coin.
I'm hoping he will review the 2-DIMM ASRock Z890 Taichi OCF
Thank you, great video.
The fact that this needs that much tuning is appalling. Something is very wrong with Intel at this point.
I found the 285K, but I'll be damned if I know where to get the 8400 CUDIMMs from.
Amazing video. Keep up the good work
Delid and push the 9800x3d, easiest bundle sell ever.
Been Waiting for this one dude!
What were the OC settings and max clock?
crazy that they went to TSMC for this
Is the 9800X3D a good buy for a gamer? I'm seeing a lot of big gains. How much vs my 11900 on a 3080?
The L3 cache in Arrow Lake has higher latency than Raptor Lake's, and combined with the extra memory latency it will never compete with tuned 13th/14th gen.
Awesome review
At the end of the day, if I have to get specialist knowledge to run this CPU at a competitive rate, then I think I'll just walk away. I don't even want to think about what damage might be done in the long term; who knows?
Is it worth running CuDIMM RAM over XMP for better performance with these new chips?
So, Arrow Lake's CCD latency is superior to a tuned 9950X? Jufes seemed pretty impressed with the CCD latency on Zen 5...
I'm waiting for the 9800x3d max tune video heh
Can you compare the 265K and 9800X3D at 2K/4K ultra settings to show people there is no difference when you are GPU-bound?
good day amigo, also headpats for the boss
Yeah, the 285K has been so hard to get. Finally getting mine tomorrow.
arrow to the heart 😭
Ryzen 7 9800X3D is the new king !!!
It's gotta be the cache holding it back, or the Foveros 3D packaging.
It's the cache; it's just not enough.
The memory controller's placement on the silicon is also wrong, but since Intel focused on AI/NPU nonsense (NVIDIA graphics cards are light-years better for AI/machine-learning stuff), they nerfed CPU performance.
I bet you could overclock a 14900KS to run as fast as the 285K with hyperthreading off.
I call it the GLUE DIP. Intel's tiles are glued/fused closer together, so they should have an edge against AMD's Zen chiplet architecture. Intel needs to work harder on improving the cache and IMC. Also, bring back the damn hyperthreading, man! Dump the NPU for desktop chips.
This review was more fun to watch than the 2024 US election results.
If this is worse than Alder Lake or the IPC is such a regression on the P-cores it's either a Bulldozer moment or Piledriver...
Are you telling me that Arrow Lake is slower than my Alder Lake i7 12700K with JEDEC DDR5 4800 while using windows explorer?
LOL no, get better ram, disable e cores, and OC that ring tho. Ion wanna hear nun else.
There have been reports that Intel runs way faster with only 1 P-core enabled, so 1P+16E and 1P+8E configs.
9800x3d, PBO, AMDip, Urzikstan, Area99, rebirth island, Verdansk... "AMD has worked directly with Activision on optimizing performance for Call of Duty: Black Ops 6 on AMD Ryzen™ CPUs and Radeon™ RX GPUs to deliver a fantastic experience"(25th oct 2024).
There will be faster memory in the 15000 MT/s range, and PCIe 5 SSD speeds are going to mature through 2025, well above PCIe 4 speeds. Once that comes about, I will look into ARL and your course. Is there a Discord for your course? Doesn't Intel also have a tuner that optimizes certain games?
15000 MT/s? On DDR5? I don't think we'll ever get that, just like we didn't get DDR4 at 7500 MT/s. Over 8000 MT/s out of the box is already awesome and arguably better than what we had on DDR4. Other than people going for records, I don't think we're going to see 10000 MT/s or more. 8800-9600 will be the sweet spot.
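For scale, the speeds being argued about here translate to raw bandwidth roughly as follows. A minimal sketch, assuming a standard dual-channel desktop platform with a 64-bit (8-byte) bus per channel; these are peak theoretical numbers, not real-world throughput:

```python
def peak_bandwidth_gbs(mts: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s (decimal):
    transfers-per-second * bytes-per-transfer * channels."""
    return mts * 1_000_000 * bus_bytes * channels / 1e9

# Compare the data rates mentioned in the thread.
for mts in (6400, 8800, 9600, 15000):
    print(f"{mts:>5} MT/s -> {peak_bandwidth_gbs(mts):6.1f} GB/s")
```

So a hypothetical 15000 MT/s kit would be roughly 240 GB/s peak vs. about 141 GB/s at 8800 MT/s, which is why the claim sounds so far-fetched for desktop DDR5.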
Only arrow lake review that matters
He deserves way more than 33k subs.
So Intel should make gaming CPU now... ;-)
It's called Core i series.
How come XMP shows 48 GB but tuned shows 32 GB? Did you end up moving to a different kit to get more out of it?
Yessss finally!!
That hoodie is literally fire 🔥
Just need to overclock D2D, NGU, ring, and the memory subsystem for easy gains.
Sick how much potential is left on the table. Why did Intel do this? I'm gonna buy the 265K.
I can't see myself buying Intel again after the 13900ks
What is wrong with it? Honest question
I wouldn't worry about it. Intel is using DLVR on Arrow Lake for the first time, and it's designed as a failsafe mechanism to prevent the chip from over-volting due to ring bus clocks. That's miles better than what AMD uses for their SoC voltages, which are still catching fire when user-error OCs push the voltage over 1.35 V. Board vendors also limit memory frequencies above 6400 in EXPO; that's why they add an extra clock cycle at higher frequencies, to keep the memory from demanding more bandwidth from the SoC than it can handle.
Did we finally beat 10900k
Chiplets? More like Diplets.
285K paper launch, hmm...
a hard pass on arrow lake
Finally the only review that matters finally drops
For a chiplet CPU, what matters is the die-to-die interface clock and the ring bus clock, as you mentioned.
I cap my monitor to 140Hz so no dips allowed for me!
"The worst of all of those", lol.
Need a chip with P-cores only, not the hybrid crap. No use for them in gaming. Leave that crap for content creators and laptops.
This is Intel's 3rd generation of chiplets: 1st was a 3D-stacked chiplet design, then Meteor Lake, then Arrow Lake. Why lie?
what memory test is he talking about in 22:12?
karhu
@@GD-oc8vp ty
intel is doomed, 9800x3d is a beast
35% everything from now on 😂
nice!!!
Meh, I was expecting a comparison with 13/14 gen also
Or you could at least share some pointers for the people......
45 seconds ago is arousing 🥴🥴🥴😩
Will it cook itself 🤔 ?
The Intel Core Ultra 9 285K is a paper launch because Intel knows they won't sell any...
So disabling E-cores does nothing?
I think the reality is quite simple. Intel, contrary to the popular read of current events, is quite smart. They don't care about gaming and little whiny babies; those guys don't own shares. Gaming has always been and always will be about the GPU. The CPU is about compute, and in the AI age, real parallel processing power = money. Yeah, OK, there is a regression in gaming, but bro, if you can see above 200 fps, get off your ass, hit the gym, go to university and become a fighter pilot or F1 driver instead of wasting your gifts on COD. Personally, from a financial standpoint and long-term outlook, I think AMD, with their X3D and superior gaming performance, is ignoring the way the market is going. Jufes, I think your competency scale is off: 9-10 is like Buildzoid, Kingpin, der8auer. Guys that will solder stuff to the mobo and CPU to win world records, not little Jimmy in his mom's basement who knows how to navigate an ASUS BIOS.
Gaming perf is a marketing tool. Intel will still make money chasing AI, but even there the competition is extremely stiff, with Nvidia far on top and AMD right behind. Marketing for long-term mindshare is very expensive and important for the business as a whole, and that's where gaming products and performance come in.
there are no games even worth going any higher than what I got already. ITS ALL JUST AI AND DATA CENTER SHIT FROM HERE GAMER BROS
I mean what GTA 6 whenever ???
AMDip vs. INTELag
Thanks Jufes!