Yeah, that's why I bought the 12700K: the 7700X has 4 fewer cores and 700MHz higher clocks for basically the same gaming perf. All that memory-tuning nonsense is on AMD; meanwhile on Intel you can use low-speed RAM, it doesn't hurt as much, and you don't pay a premium for X3D.
Applying some actual context to a video like this would be useful. Raptor Lake was never supposed to exist in the first place (which should speak volumes about the refresh), but Intel figured out Meteor Lake was never going to be ready in time. Part of the cache redesign was what allowed the clock speeds to go much higher on top of the increased capacity, and the memory controller in RPL is also a pretty big improvement over ADL.
While I don't disagree at all regarding platform compatibility, you're ignoring reality if you think it's just an easy decision. AMD proved that you cannot keep things going forever, as they dumped CPU support along the way due to limited BIOS sizes during AM4's lifetime, and Intel releases significantly more SKUs per generation. You'd need to convince OEMs and motherboard makers to change up BIOS support (or go back to text only, which I'd be fine with) and mandate a much higher minimum capacity before anything could conceivably change. Potentially shifting the way the BIOS works entirely would also do the trick, or convincing Intel to just pick and choose CPU support. Unfortunately I don't see any of this happening, because at the end of the day it doesn't provide them with monetary benefit.
I really liked your short giggle at the beginning saying "new generation". Starting from an 8086 at 5 MHz I always went through all the various generations of Intel with two rules: 1) Second hand upgrade to the best or second best from the previous generation. (I'm cheap) 2) Upgrade only if the performance is 1.5 to 2 times that of the processor I already have. With these premises I will have to wait forever for Intel to make a huge move to carry out any upgrade with more than a few percentage points of advantage... or switch to AMD.
General question: What are justified reasons for a new socket? DRAM generation is (apparently) one but PCIe generation for example isn't. Please enlighten me 💡
I suspect the 14 and 13 series are just refinements of 12th gen in terms of yields and steppings. The additional cache may be a side effect of other improvements providing more power for additional circuits. The main criticism is the lack of interesting or useful features that demand a platform upgrade. NVMe tech and USB tech are far beyond the use case of most people, and I’m struggling to find any features that the LGA1700 platform needs but doesn’t have.
Hi Steve and Tim - The ad-spot is for the ASUS ROG Swift OLED PG27AQDM but the link in your description is for the ASUS ROG Strix XG27AQMR. I'm not looking for a monitor but I wanted to check it out so thought I'd let you know.
There are certainly exceptions with 4090 buyers, as getting the absolute best often makes it last much longer (see 1080 Ti). I certainly can't afford to upgrade every generation, but I am willing to invest in my PC as it's the most important thing in my life. So, I went from a 1080 Ti to a 4090 and a 6850K to a 5950X. I expect to keep this for 3 more generations, especially as upscaling/frame-gen tech is now a thing. The 4090 is still CPU-bottlenecked by all but the latest, fastest CPUs (depending on title). CPUs can last even longer, though things aren't like the old days anymore where it was just IPC performance that mattered. It still matters, but now cores and cache do too, so an X3D CPU is worlds above a normal one. I will have to upgrade next gen to stop bad bottlenecking, so I'm holding out hope they figure out the multi-CCD 3D cache without dropping clocks too. Moore's law is practically dead. Intel must stop this platform change trend; the cost is not justified, nor is the e-waste.
I really need an upgrade over my 9700K. I was going with the 14700K or 14900K, but after all the reviews, do I need to just bite the bullet and go 7950X3D? (for gaming and not interested in the 7800X3D after all the burn issues)
The 14700K, 14900K and 7950X3D are waaaaaaay too overkill for gaming. And I'm an i7-12700K owner, which is pretty overkill as well (a friend has a Ryzen 9 5900X).
1. The burn issue also happened on the 7950X3D.
2. The burn issue was fixed. Months ago.
3. The 7950X3D is slower than the 7800X3D, as it is basically a 7800X3D with a 7700X in the backseat trying to distract it.
Very happy with the upgrade from an i7 8700K to a 13600K purchased a year ago, but yeah, later down the road I probably won't bother with upgrading again on the same motherboard and will just see what's coming next from Intel / AMD.
Time Stamp: 13:00 I totally agree. If you just play the game you will never tell the difference between the 12900K and the 14900K. Unless you play the "FPS counter in the corner" game, then yeah, you will see a number difference, but not a gameplay difference.
Looking back, not only do these rebadges barely have any performance gains in them, but they also started having stability issues due to the silicon either being pushed too hard or having a flaw in the architecture. Not a good look for Intel, these Raptor Lake CPUs.
That's why I went with the 1700X over the 7700K when it was released. After some years I replaced the 1700X with a 5600 a couple of months ago. I gave that PC to someone else who's rocking it daily and bought an AM5 system with a 7800X3D. I wonder when I will replace that one :)
Oh hella yes, I had the same journey (although with a 5900X instead of a 5600) and it's been so awesome. Back in the day I did some CPU-intensive tasks too (mostly Blender, which has since been taken over by the GPU, especially since I got my first RTX card), so I got some good use out of that 1700X, and it was a friggin beast by 2017 standards. And Zen 4 is absolutely crazy; going down from 12 to 8 cores literally hasn't been a downgrade.
AMD's platform support counts a lot, even when they mess it up. I did switch motherboards, from my OG X370 board to a B550 when upgrading the CPU, because X370 didn't have Zen 3 support yet, but I gave that X370 mobo and the 1700X in it to my cousin, and since then the platform did gain that support and he was able to upgrade to a 5600. There are actually CPUs from all four of the AM4 generations in the family and it's hella nice to be able to just mix and match them as needed.
What about the instruction set extensions, the hybrid architecture, and RAM compatibility? Wouldn't AVX-512 coming and going, the new P+E core design, and the requirement to support both DDR4 and DDR5 be a problem for the proposed motherboard that would support 10th to 14th gen Intel processors? On the AMD side, AM4 lacks AVX-512 and AM5 has it, and the PCIe and RAM gens are newer of course. I wonder what will be new with AM6 other than DDR6.
What's the engineering reason for a socket upgrade? I've always thought the main one was power delivery. Considering how much power Intel can push through LGA1700, shouldn't they be able to just pop the next gen on it as well? That is, unless they plan on blasting even more power.
Yea if you already own 12th or 13th gen, waiting a couple years will definitely get you more noticeable jump. I'm not sure 14th gen has ANY noticeable impact on gaming 🤣😂
Sure. But I mean from intel's perspective. Why would they need to make another socket for the next gen, if it doesn't require better power delivery? And yeah, the obvious answer would be "so they can sell chipsets to mobo vendors" but that should be the narrative then. :P @@csguak
From what I gathered from leaks, the improved power consumption on 14th gen comes mainly from a digital voltage regulator (forgot its acronym, something like DLVR) that was finally ready and enabled in 14th gen (apparently it was there in 13th gen too, but not "ready"). It's interesting that it seems to work on the 14700K and 14600K, but not on the 14900K. I'm wondering if there's something weird there; maybe the mobo gives the 14900K way too much voltage to be sure it can run that 6 GHz. I would be curious about a same-voltage and tuned-voltage comparison too (maybe 14th gen can use less voltage for the same clock speeds).
I'm also still baffled that there is a difference between 13th and 14th gen at all. I could see it on the 14700K, as it has more L3 cache, but for the other two? For the same clock speeds and cores, it should be the exact same performance. I'm really curious where this difference comes from.
All in all, while 14th gen isn't that interesting, it still looks like it is very flexible and tunable. And seeing how the 14900K consumes more than the 13900K in situations where it shouldn't, I can't shake the feeling that the mobos are configured to give too much voltage. Any chance of seeing a video with tuned voltage too? And maybe some OC? I feel like it's a lot of work, but if you spend the time to tune it and OC both the CPU and memory, a 14900K can be much better than stock, and also more meaningfully better than a 13900K (like 5-10%, not just 1-3%) and even come out better than a stock 7800X3D (can't say about a tuned/OC 7800X3D though).
There is nothing to confirm they got DLVR to work. Intel mentioned nothing of the sort. If they got it to work I'd expect them to mention it during the announcement since that would've been one of the only changes. Power consumption of 14th gen is still very high
@chrisdejonge611 and @UnluckyDomino you make solid points. I know about it from the Moore's Law is Dead channel here on YT, which covers leaks. He said that it would come with 14th gen, and then said that it's why, for example, the 14600K delivers both higher performance and lower power draw. Though, to be fair, DLVR was theoretically rumored to improve power efficiency by 20%, and the numbers on the 14600K seem a bit lower than that. In both Cyberpunk and Starfield, the power was reduced from 433 to 419 Watts, but that's total system power. The performance was 1 FPS better in Cyberpunk and the same in Starfield. So 14 Watts lower. In BG3, it went from 402 to 388, another 14 Watt difference, for a 1 FPS improvement in the average and 2 FPS (also 2%) in the 1% lows.
Oh, I forgot to check the all-game averages in the 14th gen video from HUB. There, the 14600K averaged 460W, as opposed to 477W for the 13600K. So 17 Watts less for an average of 2 FPS more in both the averages and 1% lows. Edit: if I'm not mistaken, here the 14600K also runs like 100MHz higher, so the Watt difference might be a bit larger if it was configured to use the same clock speeds. Unfortunately, we don't know how much of those ~400W is the CPU, but we can assume it's something like 120-150W. So 14-17W is roughly a 10 to maybe 15% improvement. That can be from node improvements, but I do think it can be the DLVR too.
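For anyone who wants to sanity-check that last bit of arithmetic, here is a minimal Python sketch that just reproduces it. The 460W/477W figures are the total-system averages quoted in the comment above; the 120-150W CPU share is the comment's assumption, not a measured value.

```python
# Rough sanity check of the arithmetic above. The 460W / 477W figures are the
# total-system averages quoted in the comment; the 120-150W CPU share is the
# commenter's assumption, not a measured value.

system_14600k_w = 460.0  # average total system power, 14600K (quoted above)
system_13600k_w = 477.0  # average total system power, 13600K (quoted above)
delta_w = system_13600k_w - system_14600k_w  # 17 W saved at the wall

for assumed_cpu_w in (120.0, 150.0):  # assumed CPU-only share of the total draw
    print(f"Assuming ~{assumed_cpu_w:.0f}W CPU draw: "
          f"~{delta_w / assumed_cpu_w * 100:.0f}% lower CPU power")
# Prints roughly 14% and 11%, i.e. the "10 to maybe 15%" range mentioned above.
```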
Not sure Buildzoid would agree. If he can't get 8000 stable on Intel, is it really an advantage for an enthusiast, much less for most people who don't tune their RAM? Also, unlike AMD, Intel does not gain much performance from faster RAM.
Ring frequency being untied from the e-cores was pretty big too, tbf, especially for getting people on board with them, since the hit to ring frequency with e-cores enabled made gaming performance mostly worse even when they were scheduled correctly. Optimum Tech showed that in various competitive multiplayer games on a 13900K, 1% lows now increase by like 15% and averages by 5% with the e-cores on, which still wouldn't happen today with 12th gen for my prior reason, so they're now giving gamers the intended mild-to-moderate frametime advantage they were advertised to. Also, in theory extra P-cores could do the same with a perfect scheduler, but the fact is, without forcing Microsoft to improve it for them, it wouldn't, and you can see that from the uplift even in Overwatch 2, which uses like 4 P-cores, so you had like another 4 available to do that, but it clearly doesn't really help.
@@Raivo_K I'm not sure either, but I'm afraid that's irrelevant to the comparison between the 13th/14th gen and the 12th gen. If, for instance, you can get DDR5 work at a certain significantly higher speed with 14900K/13900K than with 12900K, that shows their IMC is better than 12900K's regardless of whether 8000 is achieved stably or not. If the difference is satisfactorily large for enthusiasts, then it's an advantage for them.
I can't believe my 12700k that is $273 on PC part picker beats a 14600k which is going for $300. Don't buy into false advertising. Bigger number is not always better.
Intel completely devalued the 'gen' branding with this. Plus it means any benefit seen in the 15th 'gen' will be compared against two 'gens' before it. It's nothing more than labelling it as a 'gen' to entice purchasing and to sell new motherboards, which are also pointless. Nothing more than a cash grab.
I've purchased *twelve* AM4 systems. 10 motherboards and 2 prebuilt. Gave away two, and two at parent's place, so eight currently at home. Zero AM5 so far, but I did buy a Phoenix laptop. It's mind-blowing that this 28W ultra portable is neck-and-neck against desktop i9-11900k for CPU performance, with way faster iGPU.
Has Intel ever committed to a platform? Just curious, because back when I was an Intel-only guy, I can remember barely getting 2 chips per motherboard...
Back in 2021 I picked up a 11600k for my first ever build. Despite being fine with my decision even until now, the AMD longevity looks very enticing for the future.
2:50 This is only true if you consider the R7 5800X3D/R5 5600X3D to be an "entire extra year or two of CPU support" for AM4. If you only consider the major architecture releases otoh (Zen 1, Zen+, Zen 2, Zen 3), then AM5's guaranteed support roadmap is already JUUUUST as long as AM4's was. 🤷 (AM4 = 2017 through end of 2020, vs AM5 = 2022 through end of 2025)
Man, you i7-4770/3770 folks had it good... I went from an i5-3550 (multiplier + BCLK overclocked) to an i7-12700K just this summer. $350 for CPU, RAM, and mobo. I'm cheap.
@@haydn-db8z The i5s had similar gaming performance back in the day compared to the i7s, so it was the wiser choice. I actually had upgraded that MB from an i5 4650 to an i7 4770.
I have an i3 12100 which I got for £90 at the start of the year. I got lucky here as it fits my purpose, has a potential upgrade path, and my CPU allows for enabling AVX512 which helps significantly for RPCS3 emulation.
RPCS3 runs well on the 12100, really? It runs pretty good on my 13600k even without avx512 but I was thinking the two extra pcores and extra clocks of the 14700kf might help, with rpcs3 and yuzu for the more demanding scenarios.
Heyo, could you do a video about integrated graphics and their use cases, comparing AMD and Intel? Professional applications especially benefit from them and are a reason why people might want to pick one or the other. And I can barely find good sources on how good AMD's integrated graphics are in comparison to Intel's, and whether they integrate as well as QuickSync does. That's also a reason why the only Intel CPUs I'm interested in atm are the 500s.
I almost wasted my time commenting on a random bar graph video. Thank you guys for actually answering my question of whether it's worth upgrading if you're tech literate and already overclocking manually.
@@Chris3s Well, if you're looking for the lowest power consumption, an IPC test isn't the place for that. Remember the 14900K doesn't run at 5GHz, and power consumption goes up quickly when approaching maximum clocks. The idea here is to compare how much of an improvement the architecture has made each generation, so you limit all variables, and the power numbers are just to compare each generation's base efficiency, basically to see how much power is needed for the work done. It's not the final figure that's important but the difference between them.
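To illustrate the logic being described, here is a minimal Python sketch of how a clock-locked comparison separates architectural IPC from clock speed, and treats power as work-per-watt rather than an absolute figure. The gen12/gen14 scores and power numbers below are hypothetical placeholders, not HUB's results.

```python
# Minimal sketch of the clock-locked comparison described above.
# The numbers below are hypothetical placeholders, not measured results.

def ipc_ratio(score_new: float, score_old: float) -> float:
    """With both CPUs locked to the same clock, the score ratio is the IPC ratio."""
    return score_new / score_old

def efficiency(score: float, package_power_w: float) -> float:
    """Work done per watt at the locked clock (higher is better)."""
    return score / package_power_w

gen12 = {"score": 100.0, "power_w": 125.0}  # hypothetical placeholder values
gen14 = {"score": 103.0, "power_w": 118.0}  # hypothetical placeholder values

ipc_gain = (ipc_ratio(gen14["score"], gen12["score"]) - 1) * 100
eff_gain = (efficiency(gen14["score"], gen14["power_w"])
            / efficiency(gen12["score"], gen12["power_w"]) - 1) * 100
print(f"IPC uplift at fixed clock: {ipc_gain:.1f}%")
print(f"Efficiency gain at fixed clock: {eff_gain:.1f}%")
```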
Can you shed more light on your findings that the 14600k total system power usage was quite a bit lower than the 13600k? There are a few other reviews with similar findings, but most see a slight increase in power usage compared to the 13600k. What are the factors contributing to this variance?
With Tick-Tock, Intel paired motherboards with one processor architecture. The second generation was a shrink of the same architecture: Sandy Bridge & Ivy Bridge, Haswell & Broadwell. Then Skylake was crazy: 4 generations with very similar architectures and 2 or 3 different sockets. The IPC difference was small for 6th to 10th Gen Core processors.
Could the fixes required to address the speculative execution side-channel attack vulnerabilities be contributing to this? Have they had to spend this time getting back the lost performance?
“Intel does it again with an outstanding product release, just look at that outstanding performance. All while AMD has no product release at all and can’t compete even with their marketing lies and manipulations.”
I’d love to see Intel change but I’m not holding my breath. Not that it is a big issue for me as I have tended to build a top or near top end machine and keep it for around 4 years. So guess I will be stuck with my i7-13700k/Z790 machine for another few years.
Intel is definitely never changing. They still outsell AMD by far, despite really not being that competitive. Not enough incentive to improve when you have so much guaranteed income.
Some people say that the 14th gen's only advantage is the enabling of "DLVR", which is supposed to reduce power consumption at the same clock speed and performance. But apparently, for this advantage to be really perceptible, the socket must be power-constrained, way more than the K-series is by default. For example, it would have to be limited to something like 65W (non-K variants) in order to observe better performance at the same power budget. This is a "claim" though; I haven't checked it myself, nor seen a review trying to analyze the 14th gen power efficiency advantage from this angle. It would be interesting for a reviewer to take a look.
I wonder how many Intel chip designers or product managers watch stuff like this? The chip designers would probably say "yep, this is true, we really haven't innovated lately" and the product folks would say "we have to sell more chipsets, so they have to change to maintain revenue." Who knows.
a bit silly q, but would it have made more sense to clock 13th and 14th gen to match 12th gen? Wouldn't pushing 12th gen up kind of make it irrelevant? With that said...everyone can see how the 13/14th is pretty much a waste for anyone on 12th gen, unless they are thinking of maybe the 14700k variant.
In short: All AMD needs to do is support AM5 from Zen4 all the way to Zen6 and they've succeeded in making Intel look silly, again. Shouldn't be that hard, actually. AM5 has everything it needs to be adequately equipped for that timeframe. Nobody on a consumer platform will need PCIe6 or CXL, anyway.
8:56 - Corporate simping is one of the most bizarre and counterproductive behaviors ever, especially when it largely comes from people who are tech savvy enough to know better.
Wait, how did you get the 12700 to run with faster RAM? I can't seem to XMP my RAM on a Z690-E Gaming WiFi (ASUS) with Corsair Vengeance 6400. I am rocking 4 sticks though. Any advice?
10:05 I am not sure you can know for sure they could have skipped LGA1200. An issue is that LGA1700 introduced DDR5, which wasn't an option at the time LGA1200 released. But Intel could have decided to design their socket for better longevity, so that something akin to LGA1700 was used instead of LGA1200 back then. A dilemma would be DDR5 support, which just doesn't look good to not have, but then Intel could have decided to design the socket around the new generation of DDR, like AMD managed to do with AM4 and AM5 at least (I dunno if they have with others previously). It will be interesting to see if AM6 happens by the time DDR6 comes out, so AMD continues their great longevity trend.
If Arrow Lake will focus on power efficiency by being on a new process (20A or TSMC 3nm), why on God's earth would the new processor break compatibility with LGA1700, which has motherboards with extremely beefy VRMs that can supply 400+W of power to Raptor Lake? Surely LGA1700 can power Arrow Lake? Part of Intel's problem is the lack of longevity of the platform. I know Z790 needs some upgrades, like at least 4 additional PCIe 5.0 CPU lanes for M.2 storage so that using a PCIe 5.0 NVMe drive doesn't cut the GPU lanes to 8x, and so forth, but Intel should consider keeping a platform around for 3 true generations. 14th gen is really 13.1 gen. So in my opinion LGA1700 is still just a 2-generation socket.
It seems like intel has milked as much performance as they can out of this current architecture. They can't really push power higher either.
That sounds like a challenge 🔥
Which means that this architecture was badly engineered, with no headroom for improvement and no platform reusability.
@@DarioCastellarin I think many of Intel's current problems stem from the known issues and troubled development of its -Intel 7- 10 nm process, especially the power consumption.
Surely at some point they've got to stop putting 300+W monsters on boards
Intel presented a 1000-watt cooler, so I wouldn't be surprised to see a 600-watt 15900K.
It seems pretty much the only difference between Alder Lake and Raptor Lake is the increased L2 cache, going from 1.25MB to 2MB per core.
This. We know that games love cache because of how insane AMD's X3D CPUs are in gaming workloads while showing no difference in productivity compared to the non-3D versions of their respective architectures. And that's the same story we see here with Intel too, no change in productivity from 12th to 13th gen, while we get a small but measurable boost in games.
Yeah, along with the ring frequency being untied from the e-cores (essentially what controls your L3 cache speed and latency) and also better cache prefetching. Still more changes than literally the entirety of 6th-10th gen combined. All they did was make the same die with extra cores, and in fact the first time they went to 6 cores they had to increase the gate pitch of 14nm (70->84nm), making it ~20% LESS dense from then on 😂. What a joke.
14nm++ was actually 14nm- ☠️
Yup, which only affects IPC in certain kinds of workloads. 🤷 And bumping L2 by itself is notably less beneficial for gaming in particular than an equally significant (aka die sized) L3 bump ala AMD's V Cache, as the former has no impact whatsoever on core to core communication/latency as L2 is entirely exclusive per core, whereas increasing the shared L3's capacity DOES improve core to core communication & latency!
You can see this crystal clear in the fact that X3D brought JUST as big of a gaming performance leap as the entirety of Zen 4! Aka X3D (doubled L3 cache) at MINIMUM matched the perf improvement not just from Zen 4's L2 size doubling (from 512KB to 1MB per core), but from the entire architectural change! If that doesn't make it clear that an L2 bump ≠ an L3 bump for gaming, I dunno what will. 🤷
Which is rather genius to save design costs :D
The clocks went up, so Raptor Lake is actually OK - not every generation should bring IPC changes. 14th gen is weird - even for a refresh it doesn't bring enough.
This is Intel's way of giving us 2 CPU generations per socket, as compared to AMD.
Intel: "It's 3 generations! Reeeeeeehhh!"
@@catsspat Yes, and 14th gen really goes brrrrrrrr... -.-
Well you can say the Zen + architecture was a minor improvement over Zen. AMD are rushing (whilst extremely late) to get out the new generation, and that's spelt disaster for RDNA 3.
@@stevenwest1494 Yeah, but Zen+ had decent IPC and clock gains, whereas this doesn't do anything except add a number lol
@@stevenwest1494 Zen+ had better IPC, clocks AND an entirely different design. They went from a monolithic die to chiplets. How is any of that similar to this? Oh, it's not. Fair point about RDNA3, as they shouldn't have even released it, but who's talking about GPUs? Don't wanna get started on Intel's bust GPUs that they have to sell at a loss because they perform about half as well as they should (at best) given their die size and transistor budget.
Man, I cannot wait to see how UserBenchmark will skew this to make AMD look bad.
Divide the performance by the cache… it makes 5800x3d and 7800x3d the worst gaming CPUs ever!
😂😂😂
This BS website should be sued for spreading misinformation and fake news... Can't you guys in the US do something about it? Because from the EU I don't think it's possible.
The issue is that AMD and Intel CPUs are good for different things. You aren't really going to be able to give one score that encapsulates those different things. I.e. AMD is better for gaming because it does more work per core; Intel tends to be more suited to productivity tasks, as those CPUs tend to have more cores that make up for the lower amount of work that can be done per core. UserBenchmark scores are based on measuring productivity rather than the work done per core, so it isn't great if you care about gaming more than productivity tasks.
I just read their i5 13600K review, it was pure comedy gold.
@@haukionkannel The i7-12700K does beat an R7 5800X3D.
The 5800X3D was pretty overrated, the 7800X3D solved all the flaws with the first X3D chip.
These new Intel 14th gen CPUs are worthy of the RTX 4060 Ti.
At least 14th gen is the same as the previous gen, while the 4060 Ti is a downgrade.
Mediocre Build 2023
@@DarioCastellarin GN already has the disappointment build every year.
The sad part is that you'd probably be surprised by how many of those exact configs will end up being sold regardless.
We here watching HWUB's coverage (and probably several other channels') might realize that that'd be the most stupid configuration to get - but the retail market works differently, both on the OEM and the consumer side.
And there should be some masterpieces like Gollum installed on that system. Full house
I'd be willing to pay a little extra on the motherboard side if they lasted more than 2 true CPU generations. I'd rather pay for longevity than for more flashy motherboard RGB.
Get a Tomahawk board and you won't get any flashy RGB. Problem solved.
And PSU too.
So buy AMD? Their platforms last WAY longer than Intel's.
Tbf that really won't be happening anyway. Absolute max is 3 gens. PCIe 6 and DDR6 are not that far away, and you'd need new motherboards for that. Besides, usually you should NOT get the first gen of a new AMD socket; Zen 4 motherboards have been issue after issue.
@grzegorzszewczak2808 Doesn't mean that other people don't do it. I went from an R5 3600 to an R9 5900X and it changed my experience a lot. Just because you never did it doesn't mean others never have.
It would be nice to have the same test for AMD CPUs to see the difference.
1800X vs 5800X3d -> double the fps and less power consumption basically
They did already
@@SweatyFeetGirl But it gives the wrong idea that the 1800X had competitive gaming performance at the time of its release, which was certainly not the case.
it certainly did compared to bulldozer before it @@h1tzzYT
@@h1tzzYTNot competitive in the sense that it was still firmly behind in gaming performance compared to Intel's higher end offerings at the time.
Very competitive in the sense that Zen 1 offered 8 cores when Intel was still shoving 4 cores (sometimes even with no HT, mind you) down consumers' throats - and at a much lower price.
I like how the IPC debate only gets brought up when something comes out that literally doesn't move the needle or when it does A LOT.
Intel needs to step it up on every level.
I'd be surprised if it wasn't like that. Things are only newsworthy when they're outside the norm, and that goes for just about everything. I might be interested in the *details* of IPC improvements when they're incremental (not huge or absent), but the fact that they exist isn't worth mentioning.
That's true.
If Intel wanted to cause a positive stir, then they should have released an LGA 1700 series that has really good integrated graphics.
A Ryzen 5600G- or 5700G-style Intel CPU line could turn some heads. And it would help Intel leverage the amazing work they've done with their GPUs.
Intel is marketing numbers cleverly but their fab is still 10nm and supposedly MTL will be their working 7nm but look at the POWER. Look how far behind TSMC it still is.
@@JoeL-xk6bo Are you comparing nm to nm?
It doesn't work that way.
Intel 10nm is closer to TSMC 7nm DUV (if I recall it is DUV) than to TSMC's similarly named 10nm.
Nm is also a worthless metric nowadays.
@@789know It's kind of misleading when Intel calls their newer 10nm process "Intel 7" and their 7nm process is called "Intel 4".
This is like Skylake to Kaby Lake or Comet Lake to Rocket Lake all over again. Basically a refresh with minimal IPC improvements.
I feel like this is actually _worse_ than that -error- - excuse me - _era_ 😂
I suspect that if we look back, we'll find _higher_ IPC jumps in those notoriously small-increment generations than we're currently seeing out of Intel.
Rocket lake did a 19% increase (integer).
Welcome to the +++++++ era.
Welcome back* to…
Absolutely insane how the difference between the 12th gen to the 14th gen is so minimal that it might aswell not exist
Yeah, and some people upgrade their rig to "newest" spending thousands of dollars and not knowing that they will get a few % of uplift
@@DeathCoreGuitar Not exactly how it works. The 12900k is significantly slower than a 13900k.
What's being shown here is the performance of the chips themselves at a certain clock, as well as the efficiency at said clock.
reminds me of 6th-9th gen and 10th-11th gen
IPC difference from 6700K to 11900K was almost nonexistent(HUB covered it)
@@rustler08 Yeah yeah, I phrased it poorly, sorry. I meant that some people spend a lot of money on a new CPU (and potentially new motherboard) just to get the same architecture chips but overclocked to the sky leading to more spending on a cooling system because they are hot as hell itself and electrical bills because of a high power draw.
Also 13900K is "faster" because it has 32 threads and 12900K has 24
@@rustler08 It means that the 14900K is a 12900K with higher clocks via more power consumption.
This is exactly the video I have been waiting for as a 12900k owner!
I limit my power to 140w as I use a passive case and I wanted to know if the 14th Gen chip would offer better performance at the same power limit through efficiency improvements. You have shown that there is very little point in changing. Yes the newer chips are more power efficient but like when I assessed this with 13th Gen, the improvement isn’t worth the effort.
Thank you very much for making this video and congratulations on making 1,000,000 subs!
My 13700K is limited to 1.28v through an adaptive vcore. It runs at an all-core 5.5GHz with a max package power of 228W and 80 degrees C in Cinebench 2024. So I do feel this is partly Intel's fault for leaving the power limit unlocked at stock when it should have been vcore-limited, and unlocked only by those who want to, as the performance is stellar at 1.28v (some can do it with an even lower vcore) with the power draw at half of what HUB got... I do feel HUB and others could have pointed this out, as I do not see many running the K CPUs at the absurd vcores that motherboards allow, hitting 1.4v and above, which is just stupid...
That is not correct. There is a big difference when you limit both to 140w. Reason is, 12900k at 140w cannot get anywhere near 14900k's clocks at 140w. Comparing just the Pcores alone, a 12900k at 140w might drop to 4ghz and the 14900k might easily be hitting 5ghz.
You're completely missing the point that Raptor Lake is far superior to Alder Lake due to much larger cache sizes. "Speed" is irrelevant. Your power-wasting 12900K is a boat anchor which is easily outperformed by a 13600k in almost every benchmark.
@@awebuser5914 That is also not true. The 12900k is not being outperformed by a 13600k
I'm sorry, but even with both limited to 200W... the 14900K or 13900K pisses all over the 12900K... hell, even limited to less than that... there is a HUGE difference...
AMD's commitment to AM4 is what made me decide to switch from Intel in my latest system build. I bought into AM5 in the hope that AMD will support that platform as long as they did AM4. I like the ability to drop in a new CPU when I want to upgrade in a few years without having to hassle with a motherboard upgrade. You just don't get that experience with Intel.
Same. I originally bought a 4670K to kind of wait out a 4770K or 4790K dropping in price, and it didn't until the R5 3600 launched, which was cheaper, much more efficient, and had the SMT I was looking for. Hung onto that 3600, then dropped in a 5800X3D for the win. Try to beat that upgrade path with Intel. Intel would need a very compelling reason for me to go with them again, not some bragging right at the expense of power usage and heat output.
Clock for clock is the type of testing I ALWAYS love to see when new gens launch - and leave it to HUB to make it happen. Would also love to see steady state performance at specific power thresholds, as well as separate P-core and E-core performance metrics.
The E-cores have pretty bad IPC. I have a Haswell i7-4700MQ laptop chip from 2013, and its IPC is 25% higher than the E-cores on my current i7-12700K.
If Intel used Haswell for the E-cores, imagine how much better the performance per watt would be, or the E-core memory latency penalty.
I did a benchmark of the E-core cluster on my 12700K and it is faster than my 10-year-old laptop only because of the clockspeed, and not by much.
why would Intel fix this? OEMs will still buy it. They sell product no matter how good it is. They are totally insulated from the effects of competition.
They. Do. Not. Care.
1000%. I was stuck on LGA1150 and basically needed to build a whole new PC to update. I switched to AMD for my current PC, built in 2022. Excited to see what's possible on the AM5 platform.
I am still on my R5 1600, perfectly fine for most games with my recently bought RTX 3060 (the 1060 6GB was fine too, but a bit limiting).
@@Chris3s get a 5600 when you got 150 pounds spare, good solid upgrade.
@@misterpinkandyellow74 It's even cheaper than 150 pounds lol, it's 135€/120GBP.
from the tests I saw the FPS increase at 1440p is minimal (in my case even ultrawide), or did I miss something? @@misterpinkandyellow74
@@Chris3s I upgraded from a 2600 to a 5600 with the same 1060 6GB GPU; the Crysis benchmark at 1080p medium settings went from 120fps to 230fps...
Hey guys, just wanted to congratulate you on 1 mil subs, absolutely deserved. I think you're the most unbiased (thus most reliable) hardware reviewers. Keep up the great work! :)
Love the in-depth testing as always, especially always calling out stuff that might not be obvious to new watchers (like how important power consumption, temps, platform support/cost and driver support (ARC) can be). Looking forward to new reviews and podcasts. Btw, have you considered inviting guests/specialists for certain topics, or will it remain a more chill conversation between yourselves?
He is the expert. If he calls someone in, it's just to have a conversation.
I agree that generational platform support has me looking at AM5 for a new build since it's the only current platform with an upgrade path into the future.
I totally agree with your final thoughts. I would love to update my 10900k but I'm not willing to buy another dead end motherboard with no further upgrade path.
I guess 15th gen is your go-to then, or AMD if they bring out something decent for their 8000-series CPUs.
Totally agree with your assessment. My 10900K is more than enough for what I need, so I have no desire to upgrade.
You're right. I bought an i5 11400F when I was low on budget, but now that I have more budget and my CPU struggles on my 144Hz monitor, I'd like to just upgrade to an i7 13700, but I can't. I could have done that with a B450, so it was my huge mistake. Now I need to replace half the computer instead of just 1 component.
That’s just Intel for ya. I’m still using my X470 I bought in 2018 and just upgraded the CPU to an 5800X3D when my 2700X was starting to struggle.
@@tilapiadave3234Man, it's not about selling 2 instead of 1. Maybe you don't want to get it. My MOBO is connected with 9234208734 cables from 354 sides, when I'm thinking about the change, it's just problematic. I'm tired of thinking about it. Much more problematic than switching new MOBO. I would like to switch CPU and forget, not unmount most of the computer. It can be done, you're right, but it's highly demotivating
@@tilapiadave3234 tilapiadave is shocked to learn that consumers have preferences and they can express them in RUclips comments
So AMD achieves 10-20% each gen while Intel doesn't even get to 5% in 2 gens.
Sad.
This is hardly an Intel issue. This is peak capitalism and planned obsolescence. No significant technological advancements have been made in the last decade, and yet shareholders in all tech companies have increased their wealth on a yearly basis.
Most people have too short an attention span to recognize these patterns. AMD would've done the same had they been the market leader for decades. The PC landscape has been doomed to begin with ever since the universal adoption of proprietary x86 architecture gave rise to the duopoly of AMD and Intel.
Some people are so preoccupied with these menial dilemmas they don't even realize 99% of games never needed to be created with high end rigs in the first place. AAA publishers and silicon giants have entered a symbiotic relation completely unbeknownst to the cows they're milking.
No dude, Moore's Law is dying. Watch the size of the flagship NVIDIA/AMD cards each generation.
LGA1700 supporting "three" generations of CPUs that are totally not the same rehashed part.
Alder Lake++ memes go brrrrr
Let's not forget that some of the 13th Gen processors were also a refresh of Alder Lake.
When I was building a PC I wanted to use the 12600K, but went for the 12700K to, sort of, max out the platform, knowing full well I won't ever change that CPU. I don't have a scenario where a 13900K would make a noticeable difference, so this is my PC until it's time to build a new one in probably 7-8 years.
Now that I think about it, GPUs are following the same pattern. It's become more cost effective to get higher end card and use it for 7-8 years, than to update every other generation.
2016-17 I had two Motherboards.
The first was an upper tier, ASUS ROG Maximus VIII- along with a Skylake 6600K. I loved this board! (around $200)
my second was a budget re-manufactured board- ASUS Prime Pro X370 for My Ryzen R5-1600.(around $90)
Guess which Motherboard is still being used?
Amazing - hopefully you've been able to hand off that old 6600k system to someone else... I have my old 6700k still going as an occasional bedroom HTPC...
@@alistairwillock7266 Unfortunately it has an issue that I couldn't solve. The issue is more complicated (it's been a long time, I honestly forget the details; 2020 was the last time I attempted to solve it), but basically it would crash/freeze after being on for some time.
I think it is a 'solvable issue' because it does not freeze if Windows is running in safe mode.
I know that, in itself, sounds like it would be easy-peasy to fix. I also remember thinking that the last time I tinkered with it (2020).
Also I am not knocking the 6600K, It worked great up until it had issues.
It is a shame, that I couldn't upgrade to even an 8th or 9th gen CPU.
I was an early adopter of AM4
Got a Ryzen 5 1600x on launch
Later bought a Ryzen 7 1800x
At the end of 2019 I got a 2700x
Mid way through 2020 I got a Ryzen 7 3700x
And just 7 months ago I got a 5700x
All on the same exact MSI X370 Gaming titanium motherboard
That was “over built” back with the release of the 1800x
Now, if I really want, I can still upgrade further, to a 5800x3d or a Ryzen 9 5950x
Such insane upgradability for a platform
A nearly 80% single threaded uplift from the 1800x
And a 170% multi threaded uplift
To the 5950x
Or a 100% single threaded and a 130% multi threaded uplift with the 5800x3d
The AM4 platform is legendary
It wasn't a bad platform overall, I think the low end i3 and i5 were pretty interesting, decently competitive, and didn't suffer as much from the power consumption issues. But yeah, AMD is still confidently in the lead...
The i3s are heavily flawed; I don't know why they are the only ones without the big/little hybrid design. The i7-12700K is the only good i7 made since the 8700K.
9:42 That's amazing, you're actually describing the very opposite of how it's been for me - when I was younger and had very little money to spend on gaming PCs, I scraped every bit of cash I could to constantly upgrade or flip my older PC for newer mid-ranged hardware (So I never had any PC configuration for more than 3 years), but now I have a top-end PC with an RTX 4090 that I bought after keeping my previous PC unchanged for over 4 years, and I have absolutely ZERO plans of even thinking about a new PC for the next few years, let alone a partial upgrade. How would it even make sense?
He's describing exactly how it is for most people on midrange/enthusiast hardware.
The upgrades are more spaced in time as you have less money to spend on luxuries.
Whereas people who buy the top end often have a "money is no object" case.
Yeah, was similar for me. Back in the early 2000s I was upgrading at least every 2 years or so, always on a budget with low end parts. Nowadays I tend to buy more high end for my main desktop and keep it for a long time.
But I also remember that I upgraded based on kind of the rule of thumb: When there's something available at about 3x the performance, I upgrade. And that was indeed within like 2 years! So my change of upgrade cycle is not mainly because I actually can afford higher end stuff now. It's kind of the other way around. Because things haven't been moving that fast it makes more sense to go more higher end and keep it longer, rather than constantly upgrading. I kept a Haswell system for like 8-9 years or so, all throughout the dark ages of Intel's quasi monopoly, until finally Zen came along and gave me a reason to upgrade (to Zen2). Just upgraded again to Zen4, but even the impressive looking gains we finally get again ever since Zen came out are still not close to the pace we had for a couple of years back then.
Anyhow where was I going with this? Right, I don't think this contradicts Steve, because it's not about budget but also about how fast hardware becomes obsolete. People who are on a budget today and buy low end hardware have no reason to upgrade every gen either. Back in those days when I upgraded constantly any 5 year old system, high end or low end, would've been utterly useless. Today most 5 year old systems are still perfectly reasonable. And I can totally see on the other end of the spectrum people buying stuff like 3090Tis because they can and just want the best (some small fraction of whom may actually need it), and might therefore do it every gen.
So, Alder Lake IPC > Zen 4 IPC, and was released 1 year earlier... 😅
Yeah, that's why I bought the 12700K; the 7700X is 4 fewer cores and 700MHz more for basically the same gaming perf.
All of that memory tuning nonsense on AMD, while on Intel you can use low speed RAM, it doesn't hurt as much, and you don't pay a premium for X3D.
I love the very creative ways of showing off the cpus in your B-roll. :D
Applying some actual context to a video like this would be useful. Raptor Lake was never supposed to exist in the first place (which should speak volumes about the refresh), but they figured out Meteor Lake was never going to be ready in time. Part of the cache redesign was what allowed the clockspeeds to go much higher on top of the increased capacity. The memory controller for RPL is also a pretty big improvement over ADL.
While I don't disagree at all regarding platform compatibility you're ignoring reality if you think it's just an easy decision. AMD proved that you cannot keep things going forever as they dumped CPU support along the way due to limited BIOS sizes during AM4's lifetime and Intel releases significantly more SKUs per generation. You'd need to convince OEMs and motherboard makers to change up BIOS support (or go back to text only, which I'd be fine with) to mandate much higher minimum capacity before anything could conceivably change. Potentially shifting the way the BIOS works entirely would also do the trick, or convincing Intel to just pick and choose CPU support. Unfortunately I don't see any of this happening because at the end of the day it doesn't provide them with monetary benefit.
The level of detail is greatly appreciated. Thank you!
Wow! It's so important !❤❤❤
Thank you for doing this testing!
I really liked your short giggle at the beginning saying "new generation".
Starting from an 8086 at 5 MHz I always went through all the various generations of Intel with two rules:
1) Second hand upgrade to the best or second best from the previous generation. (I'm cheap)
2) Upgrade only if the performance is 1.5 to 2 times that of the processor I already have.
With these premises I'll have to wait forever for Intel to make a big enough move to justify an upgrade with more than a few percentage points of advantage... or switch to AMD.
Intel made Pentium D and the OG Pentium 4 (pretty awful), AMD had Bulldozer and Piledriver.
Pick your poison.
Not like AMD is doing much better, 10-15% single thread performance increase every 2 years will keep you waiting for a while
General question: What are justified reasons for a new socket? DRAM generation is (apparently) one but PCIe generation for example isn't.
Please enlighten me 💡
Awesome, Thanks Steve. Do you have any data for the 13900KS?
I suspect the 14 and 13 series are just refinements of 12th gen in terms of yields and steppings. The additional cache may be a side effect of other improvements providing more power for additional circuits. The main criticism is the lack of interesting or useful features that demand a platform upgrade. NVMe tech and USB tech are far beyond the use case of most people, and I’m struggling to find any features that the LGA1700 platform needs but doesn’t have.
The platform is irrelevant if the GPU market still sucks.
Hi Steve and Tim - The ad-spot is for the ASUS ROG Swift OLED PG27AQDM but the link in your description is for the ASUS ROG Strix XG27AQMR. I'm not looking for a monitor but I wanted to check it out so thought I'd let you know.
There are certainly exceptions with 4090 buyers, as getting the absolute best often makes it last much longer (see 1080 Ti). I certainly can't afford to upgrade every generation, but I am willing to invest in my PC as it's the most important thing in my life. So, I went from a 1080 Ti to a 4090 and a 6850K to a 5950X. I expect to keep this for 3 more generations, especially as upscaling/framegen tech is now a thing. The 4090 is still CPU bottlenecked by all but the latest, fastest CPUs (depending on title).
CPUs can last even longer.. though things aren't like the old days anymore where it was just IPC performance that mattered. It still matters, but now cores and cache do too, so an X3D CPU is worlds above a normal one. I will have to upgrade next gen to stop bad bottlenecking, so I'm holding out hope they figure out the multi-CCD 3D cache, without dropping clocks too.
Moore's law is practically dead. Intel must stop this platform change trend; the cost is not justified.. nor is the e-waste.
"intel's 15th gen needs to offer a nice performance uplift" about that...
I really need an upgrade over my 9700K. I was going with the 14700K or 14900K, but after all the reviews, do I need to just bite the bullet and go 7950X3D? (for gaming and not interested in the 7800X3D after all the burn issues)
Any of those CPUs are good, so go with what you want. I have a 4090 / 7800X3D / 64 gigs of DDR5 and my PC is insane.
The 14700K, 14900K and 7950X3D are waaaaaaay overkill for gaming. And I'm an i7-12700K owner, which is pretty overkill as well (a friend has a Ryzen 9 5900X).
1. The burn issue also happened on the 7950x3d
2. The burn issue was fixed. Months ago.
3. The 7950x3d is slower than the 7800x3d as it is basically a 7800x3d with a 7700x in the backseat trying to distract it.
fixed if you update bios lol @@TonyChan-eh3nz
The peanuts in the background at 3:51 are brutal 🤣
I didn't even notice them xD
Very happy with the upgrade from an i7 8700K to a 13600K purchased a year ago, but yeah, later down the road I probably won't bother upgrading again on the same motherboard and will just see what's coming next from Intel / AMD.
This summer, I needed a new CPU/MB. After some research, I bought an i7-12700k. This video confirmed that I made the right choice. Thanks guys!
I did the same. The only thing potentially holding my PC back is the DDR4 RAM, but it was a very nice combo deal, so I went for it.
Timestamp 13:00: I totally agree. If you just play the game you will never tell the difference between the 12900K and the 14900K... unless you play the "FPS counter in the corner" game, then yeah, you will see a number difference, but not a gameplay difference.
As someone with a 12900k and thought about upgrading to a 14900k, thanks! You just saved me quite a bit of money!
14th Generation should have been called "Older Lake"
Solid vid, exactly the info we need.
Looking back, not only do these rebadges barely have any performance gains in them, but they also started having stability issues due to the silicon either being pushed too hard or having a flaw in the architecture. Not a good look for Intel, these Raptor Lake CPUs.
That's why I went with the 1700X over the 7700K when it was released. After some years I replaced the 1700X with a 5600 a couple of months ago. I gave the PC to someone else who's rocking it daily and bought an AM5 setup with a 7800X3D. I wonder when I will replace that one :)
Oh hella yes, I had the same journey (although with a 5900X instead of a 5600) and it's been so awesome. Back in the day I did some CPU-intensive tasks too (mostly Blender, which has since been taken over by the GPU, especially since I got my first RTX card), so I got some good use out of that 1700X, and it was a friggin beast by 2017 standards. And Zen 4 is absolutely crazy; going down from 12 to 8 cores literally hasn't been a downgrade.
AMD's platform support counts for a lot, even when they mess it up. I did switch motherboards, from my OG X370 board to a B550 when upgrading the CPU, because X370 didn't have Zen 3 support yet, but I gave that X370 mobo and the 1700X in it to my cousin, and since then the platform did gain that support and he was able to upgrade to a 5600. There are actually CPUs from all four of the AM4 generations in the family and it's hella nice to be able to just mix and match them as needed.
I went from 3800X to 5800X3D. A single, but massive upgrade with the same motherboard and RAM.
Wait, did this video ignore the temps and wattage? 😅 Der8auer showed that there are latent improvements. Latent, that is, in his delid video.
Did you guys change LUT and add more sharpness? The image feels so much more crisper and deeper now. Very good change
What about the instruction set extensions, the hybrid architecture, and RAM compatibility? Wouldn't AVX-512 coming and going, the new P+E core design, and the requirement to support both DDR4 and DDR5 be a problem for the proposed motherboard that would support 10th to 14th gen Intel processors? On the AMD side, AM4 lacks AVX-512 and AM5 has it, and the PCIe and RAM gens are newer of course. I wonder what will be new with AM6 other than DDR6.
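On the AVX-512 point specifically, software already deals with features that come and go on the same platform via runtime detection rather than assuming the socket guarantees anything. A minimal sketch, assuming a Linux system where /proc/cpuinfo is readable (compiled code would normally query CPUID directly):

```python
# Minimal runtime feature check: pick an AVX-512 code path only if the CPU
# actually reports it, regardless of which generation sits in the socket.
def has_avx512f() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return any("avx512f" in line for line in f if line.startswith("flags"))
    except OSError:
        return False  # not Linux, or cpuinfo unreadable; assume the feature is absent

if has_avx512f():
    print("AVX-512 path available (the kind of thing RPCS3 takes advantage of)")
else:
    print("Falling back to an AVX2/SSE path")
```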
What's the engineering reason for a socket upgrade? I've always thought the main one was power use. Considering how much power Intel can push through LGA1700, shouldn't they be able to just pop the next gen on it as well? That is, unless they plan on blasting even more power.
Yeah, if you already own 12th or 13th gen, waiting a couple of years will definitely get you a more noticeable jump. I'm not sure 14th gen has ANY noticeable impact on gaming 🤣😂
Sure. But I mean from intel's perspective. Why would they need to make another socket for the next gen, if it doesn't require better power delivery?
And yeah, the obvious answer would be "so they can sell chipsets to mobo vendors" but that should be the narrative then. :P @@csguak
I don't think Intel will ever match how great AM4 is
From what I gathered from leaks, the improved power consumption on 14th gen comes mainly from a digital voltage regulator (forgot its acronym, something like DLVR) that was finally ready and enabled in 14th gen (apparently it was there in 13th gen too, but not "ready"). It's interesting that it seems to work on the 14700K and 14600K, but not on the 14900K. I'm wondering if there's something weird there; maybe the mobo gives the 14900K way too much voltage to be sure it can run that 6 GHz. I would be curious about a same-voltage vs tuned-voltage comparison too (maybe 14th gen can use less voltage for the same clock speeds).
I'm also still baffled that there is a difference between 13th and 14th gen. I could see it on the 14700K, as it has more L3 cache, but for the other two? At the same clock speeds and core counts, it should be the exact same performance. I'm really curious where this difference comes from.
All in all, while 14th gen isn't that interesting, it still looks like it is very flexible and tunable. And seeing how the 14900K consumes more than the 13900K in situations where it shouldn't, I can't shake the feeling that the mobos are configured to give too much voltage. Any chance of seeing a video with tuned voltage too? And maybe some OC? I feel like it's a lot of work, but if you spend the time to tune and OC both the CPU and memory, a 14900K can be much better than stock, and also more meaningfully better than a 13900K (like 5-10%, not just 1-3%), and even come out better than a stock 7800X3D (can't say about a tuned/OC 7800X3D though).
There is nothing to confirm they got DLVR to work. Intel mentioned nothing of the sort. If they got it to work I'd expect them to mention it during the announcement since that would've been one of the only changes. Power consumption of 14th gen is still very high
@chrisdejonge611 and @UnluckyDomino you make solid points. I know about it from the Moore's Law is Dead channel here on YT, which covers leaks. He said that it would come with 14th gen, and then said that's why, for example, the 14600K is both higher performance and at a lower power draw.
Though, to be fair, DLVR was theoretically rumored to improve power efficiency by 20%, and the numbers on the 14600K seem a bit lower than that. In both Cyberpunk and Starfield, the power was reduced from 433 to 419 Watts, but that's total system power. The performance was 1 FPS better in Cyberpunk and the same in Starfield. So 14 Watts lower. In BG3, it went from 402 to 388, another 14 Watt difference, for a 1 FPS avg improvement and a 2 FPS (also 2%) 1% low improvement.
Oh, forgot to check the all-game averages in the 14th gen video from HUB. There, the 14600K averaged 460W, as opposed to 477W for the 13600K. So 17 Watts less for an average of 2 FPS more in both avg and 1% lows. Edit: if I'm not mistaken, here the 14600K also runs like 100MHz higher, so the Watt difference might be a bit bigger if it was configured to use the same clock speeds.
Unfortunately, we don't know how much of those ~400W is the CPU, but we can assume it's something like 120-150W. So 14-17W is roughly a 10 to maybe 15% improvement. That can be from node improvements, but I do think it can be the DLVR too.
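A quick back-of-the-envelope check of that 10-15% figure; a minimal sketch where the 120-150W CPU share is the comment's own assumption, not measured data:

```python
# Attribute the whole measured system-power delta to the CPU and express it as a
# percentage of an assumed CPU-only power draw.
def cpu_saving_percent(system_delta_w: float, assumed_cpu_w: float) -> float:
    return 100.0 * system_delta_w / assumed_cpu_w

for delta_w in (14, 17):          # observed total-system savings in Watts
    for cpu_w in (120, 150):      # assumed CPU share of the ~400W system draw
        print(f"{delta_w} W saved on a ~{cpu_w} W CPU -> "
              f"{cpu_saving_percent(delta_w, cpu_w):.1f}%")
# Output ranges from about 9.3% to 14.2%, matching the "10 to maybe 15%" estimate.
```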
Perhaps one of a few advantages of the 13th/14th gen over the 12th gen is their better memory controller.
Not sure Buildzoid would agree. If he can't get 8000 stable on Intel, is it really an advantage for an enthusiast, much less for most people who don't tune their RAM?
Also unlike AMD, Intel does not gain much performance from faster RAM.
@@Raivo_K x3d chips barely gain anything from faster RAM due to the cache doing the heavy lifting.
Ring frequency being untied from the E-cores was pretty big too, tbf, especially for getting people on board with them, since the hit to ring frequency with E-cores on made gaming performance mostly worse even when they were scheduled correctly. Optimum Tech showed that in various competitive multiplayer games on a 13900K, lows now increase by like 15% and averages by 5% with them on, which still wouldn't happen today with 12th gen for the reason above, so they're finally giving gamers the mild-to-moderate frametime advantage they were advertised to. Also, in theory extra P-cores could do the same with a perfect scheduler, but the fact is that without forcing Microsoft to improve it, they don't, and you can see that from the uplift even in Overwatch 2, which uses like 4 P-cores, so you had another 4 available to do that, but it clearly doesn't really help.
@@xblur17 That's not true. X3D still noticeably benefits from more memory bandwidth by overclocking the Infinity Fabric.
@@Raivo_K I'm not sure either, but I'm afraid that's irrelevant to the comparison between the 13th/14th gen and the 12th gen. If, for instance, you can get DDR5 to work at a significantly higher speed with a 14900K/13900K than with a 12900K, that shows their IMC is better than the 12900K's regardless of whether 8000 is achieved stably or not. If the difference is satisfactorily large for enthusiasts, then it's an advantage for them.
I can't believe my 12700K, which is $273 on PCPartPicker, beats a 14600K that's going for $300. Don't buy into false advertising. A bigger number is not always better.
Intel completely devalued the 'gen' branding with this. Plus it means any benefit seen in the 15th 'gen' will be compared against two 'gens' before it.
Nothing more than labelling it as a 'gen' to entice purchases and to sell the new motherboards, which are also pointless. Nothing more than a cash grab.
I've purchased *twelve* AM4 systems. 10 motherboards and 2 prebuilt. Gave away two, and two at parent's place, so eight currently at home.
Zero AM5 so far, but I did buy a Phoenix laptop. It's mind-blowing that this 28W ultraportable is neck-and-neck with a desktop i9-11900K for CPU performance, with a way faster iGPU.
Has Intel ever committed to a platform?? Just curious, because back when I was an Intel-only guy, I remember barely getting 2 chips per motherboard...
Nope
Yes, a long time ago with the 775 socket, but it was a compatibility mess with the third-party chipsets.
Back in 2021 I picked up a 11600k for my first ever build. Despite being fine with my decision even until now, the AMD longevity looks very enticing for the future.
Get a B650 with a 7500F, and then upgrade at Zen 5 or maybe Zen 6 with an X3D part
@@noticing33 I love this community
ew 7500f@@noticing33
2:50 This is only true if you consider the R7 5800X3D/R5 5600X3D to be an "entire extra year/2 of CPU support" for AM4. If you only consider the major architecture releases otoh (Zen 1, Zen +, Zen 2, Zen 3), then AM5's guaranteed support roadmap is already JUUUUST as long as AM4's was. 🤷
(AM4 = 2017 through end of 2020; vs AM5 = 2022 through end of 2025)
I was on a i7 4770 and upgraded to a i3 12100f. What an upgrade for what tiny price!
I upgraded from an i7-4700MQ laptop chip (basically it's an i7-3770) to an i7-12700K (330USD).
Man, you i7-4770/3770 folks had it good... I went from an i5-3550 (multiplier + BCLK overclocked) to an i7-12700K just this summer. $350 for CPU, RAM, and mobo. I'm cheap.
@@haydn-db8z The i5s had similar gaming performance back in the day compared to the i7s, so it was the wiser choice. I actually had upgraded that MB from an i5 4650 to an i7 4770.
When is the Z790/B760 VRM test coming out?
I have an i3 12100 which I got for £90 at the start of the year. I got lucky here as it fits my purpose, has a potential upgrade path, and my CPU allows for enabling AVX512 which helps significantly for RPCS3 emulation.
RPCS3 runs well on the 12100, really? It runs pretty well on my 13600K even without AVX-512, but I was thinking the two extra P-cores and extra clocks of the 14700KF might help with RPCS3 and Yuzu in the more demanding scenarios.
Heyo, could you do a video about integrated graphics and their use cases, comparing AMD and Intel?
Professional applications especially benefit from them and are a reason why people might want to pick one or the other. And I can barely find good sources on how good AMD's integrated graphics are in comparison to Intel's, and whether they're supported in applications as well as QuickSync is.
That's also a reason why the only Intel CPUs I'm interested in atm are the 500s.
I almost wasted my time commenting on a random bar graph video. Thank you guys for actually answering my question of whether it's worth upgrading if you're tech-literate and already overclocking manually.
When testing the power usage, why disable the E-cores?
Because E-Core numbers vary across the generations
Sorry, can you explain how that affects it? I'm just not sure why you'd disable them when looking for the lowest consumption @@shaynegadsden
@@Chris3s Well, if you're looking for the lowest power consumption, an IPC test isn't the place for that; remember the 14900K doesn't normally run at 5GHz, and power consumption goes up quickly when approaching maximum clock.
The idea here is to compare how much of an improvement the architecture has made each generation, so you limit all the variables. The power numbers are just to compare each generation's base efficiency, basically to see how much power is needed for the work done; it's not the final figure that's important but the difference between them.
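To make that methodology concrete, here is a minimal sketch of a clock-locked comparison; the scores and wattages below are hypothetical placeholders, not HUB's measured data:

```python
# With every CPU pinned to the same P-core clock and the E-cores off, any score
# difference is (roughly) an architecture/IPC difference, and points-per-watt
# compares base efficiency between generations.
results = {
    "12900K": {"score": 100.0, "system_watts": 470},
    "13900K": {"score": 104.0, "system_watts": 480},
    "14900K": {"score": 105.0, "system_watts": 485},
}

baseline = results["12900K"]["score"]
for name, r in results.items():
    ipc_gain = 100.0 * (r["score"] / baseline - 1)    # same clock, so score ratio ~ IPC ratio
    points_per_watt = r["score"] / r["system_watts"]  # relative efficiency, not an absolute figure
    print(f"{name}: {ipc_gain:+.1f}% vs 12900K, {points_per_watt:.3f} points/W")
```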
Can you shed more light on your findings that the 14600k total system power usage was quite a bit lower than the 13600k? There are a few other reviews with similar findings, but most see a slight increase in power usage compared to the 13600k. What are the factors contributing to this variance?
The voltage.
The 14600K normally boosts higher, so it uses more power. Here they limit the speed to be the same, so the 14600K stays at a lower speed than it normally would and therefore uses less power.
I'd love to see an AMD IPC video too.
Could be interesting to see this test with E-cores enabled, at the same frequency, to show how they impact modern games...
It depends, in Cyberpunk 2077, they do A LOT.
They also do wonders on shader compilation.
With Tick-Tock, Intel paired motherboards with one processor architecture; the second generation was a shrink of the same architecture: Sandy Bridge & Ivy Bridge, Haswell & Broadwell. Then Skylake was crazy: 4 generations with very similar architectures and 2 or 3 different sockets. The IPC difference was small for 6th to 10th gen Core processors.
Intel's position: if we release on a yearly cadence, we can hide the lack of performance increases.
Nice summary of what they already said. 😊😊😊
Yes, "there is no difference between 10th-12th and 13th gen"... current i5s shit on old i9s, but sure, lack of performance increase.
Could the fixes required to address the speculative execution side channel attack vulnerabilities be contributing to this? Have they had to spend this time getting back the lost performance?
I can’t wait to see how userbenchmark spins this one.
“Intel does it again with an outstanding product release, just look at that outstanding performance. All while AMD has no product release at all and can’t compete even with their marketing lies and manipulations.”
Really good content here. As usual, a must watch. I really agree with your analysis. Thanks a lot Steve!
Excellent video! I totally agree with the conclusion.
So I guess I'll stay on my 12900K CPU. I'm on a 4090 and DDR4; I wonder if it's worth changing mobo and going for DDR5.
One year later, stuck with my 12900K. Recently managed to OC it to 5.3GHz on all cores.
Why have you locked the ring at 3GHz? Even 12th gen will do 4.5 with the E-cores off.
I'd love to see Intel change but I'm not holding my breath. Not that it is a big issue for me, as I have tended to build a top or near-top-end machine and keep it for around 4 years. So I guess I'll be stuck with my i7-13700K/Z790 machine for another few years.
Still a really good CPU, for a while.
Intel is definitely never changing. They still outsell AMD by far, despite really not being that competitive. Not enough incentive to improve when you have so much guaranteed income.
Some people say that 14th gen's only advantage is the enabling of "DLVR", which is supposed to reduce power consumption at the same clock speed and performance.
But apparently, for this advantage to be really perceptible, the socket must be power-constrained, way more than the K-series is by default.
For example, it would have to be limited to something like 65W (non-K variants) in order to observe better performance at the same power budget.
This is a "claim" though; I haven't checked it myself, nor seen a review trying to analyze the 14th gen power efficiency advantage from this angle.
It would be interesting for a reviewer to have a look.
To be brutally honest, I never upgrade just the CPU on its own; when the time justified a CPU change, it was usually time to change everything.
I wonder how many Intel chip designers or product managers watch stuff like this? The chip designers would probably say "yep, this is true, we really haven't innovated lately" and the product folks would say "we have to sell more chipsets, so they have to change to maintain revenue." Who knows.
Both the uArch and the process node are not great at Intel; they're hiding behind marketing and desktop-part power draw.
Why the hell was the ring bus locked at 3 GHz? Faster ring bus (especially with the E-cores on) is the major benefit of RPL compared to ADL.
A bit of a silly question, but would it have made more sense to clock 13th and 14th gen down to match 12th gen? Wouldn't pushing 12th gen up kind of make it irrelevant?
With that said... everyone can see how 13th/14th gen is pretty much a waste for anyone on 12th gen, unless they're maybe considering the 14700K variant.
In short: All AMD needs to do is support AM5 from Zen4 all the way to Zen6 and they've succeeded in making Intel look silly, again.
Shouldn't be that hard, actually. AM5 has everything it needs to be adequately equipped for that timeframe. Nobody on a consumer platform will need PCIe6 or CXL, anyway.
8:56 - Corporate simping is one of the most bizarre and counterproductive behaviors ever, especially when it largely comes from people who are tech savvy enough to know better.
Wait, how did you get the 12700 to run with faster RAM? I can't seem to get XMP working on a Z690-E Gaming WiFi (ASUS) with Corsair Vengeance 6400. I am rocking 4 sticks though. Any advice?
Try 2 sticks
4 sticks are always harder to run fast as opposed to 2 sticks (and a mobo with only 2 slots).
Still on the AM4 platform; pretty sure I'll drop a 5800X3D in there at some point and ignore new CPUs 😅
Forget about increasing scores.
They did increase the price!
Intel: we rename it, raise the prices, and here we go, new product.
10:05 I am not sure you can know for sure they could have skipped LGA1200. An issue is that LGA1700 introduced DDR5, which wasn't an option at the time LGA1200 released. But Intel could have decided to design their socket for better longevity, so something akin to LGA1700 could have been used instead of LGA1200 back then.
A dilemma would be DDR5 support, which just doesn't look good to not have, but then Intel could have decided to design the socket around the new generation of DDR, like AMD managed to do with AM4 and AM5 at least (I dunno if they did with earlier sockets).
It will be interesting if AM6 will happen by the time DDR6 comes out, so AMD continues their great longevity trend.
6:30 Intel got DLVR voltage regulation working in 14th gen. You can't tell on the 14900K, but it clearly works on the chips not pushed so hard.
Intel: we're always moving...
Laterally.
Would love to see a Ryzen comparison for the previous gen, see how far AMD has come!
Can you please do a video comparing Zen 4 & LGA 1700 based on clock for clock performance?
If Arrow Lake will focus on power efficiency by being on a new process (20A or TSMC 3nm), why on God's earth would the new processor break compatibility with LGA1700, which has motherboards with extremely beefy VRMs that can supply 400+W to Raptor Lake? Surely LGA1700 can power Arrow Lake?
Part of Intel's problem is the lack of longevity of the platform. Like, I know Z790 needs some upgrades, such as at least 4 additional PCIe 5.0 CPU lanes for M.2 storage so that using a PCIe 5.0 NVMe drive doesn't cut the GPU lanes to 8x, and so forth, but Intel should consider keeping a platform around for 3 true generations. 14th gen is really gen 13.1. So in my opinion LGA1700 is still just a 2-generation socket.
Intel has issues catching up to TSMC. They are always behind.