Jay was wrong, Zen 1 wasn't a chiplet design, and they aren't on their 5th generation of chiplet design. Zen 1 was monolithic. Chiplet design came with Zen 2.
@@terribleatgames-rippedoff I should watch the ignoramuses who whine when a $60 motherboard throttles with an $800 CPU running an artificial power virus in an artificial zero-airflow VRM situation? Or I should watch the guy who calls anything that isn't the peak GaMeR value (while spouting off about dungeons and dragons) a "waste of sand"? The 3600/5600? Perfect. The 3700X? Trash. 3900X? Trash (because GaMiNG and multiple dies) 3950X? Trash. Ditto all of the 5000 series parts.
He's double wrong. Not only was Zen 1/+ a single chip, it was not largely impacted by the CCD "problem" he referenced. Zen got competitive with Intel through better clock speeds and better memory performance (alongside bigger caches), but even after that the 3300X, which has no cross-CCD communication, couldn't match the performance of a 3600X, and both performed at about the level of a 7700K in 4-core workloads.
Yes for sure! I might upgrade to 9800X3D but we will see, I am currently using the 5800X3D so ;) Might be perfect to actually just skip 2 gen and go for the Zen 6!
I'm in the process of a new build from my i5 9400 CPU. Looking at the last few gens, this information, and Intel's clear pivot to workloads, efficiency, and AI, I can't think of any reason I'd build another Intel rig rather than switching when the 9800X3D comes out. Aside from possibly building a cheaper 12th gen Intel setup and seeing what comes up in a couple of years if the 9800X3D is somehow a disaster. So as someone who predominantly games, AMD just makes more sense to me right now.
@jamescarter8311 I got an RMA done. But here is the kicker for a 13900KF: they gave me a 14700KF and told me that's the best they can do. So I lose performance and money.
@@jamescarter8311 That doesn't help if you have to replace the CPU every year and what happens after 5 years? Intel should have done a recall, something is fundamentally wrong with 13th and 14th gen
Hi Jay, hope your health is improving and your family is doing well. I always love the content you put out here. Really informative and unbiased. By far one of my favorite tech channels. Looking forward to more. And thanks.
@@jamescarter8311 it's true that AMD has a history of underperforming, but so does Intel, so it's a pointless assertion. The crown goes back and forth between AMD and Intel, and has done so for the last 25 years.
I had a 13900K that is now in heaven and switched to a 7950X. After the Win11 fixes my actual gaming perf is the same or close; those few fps Intel had came at the cost of obligatory degradation. I do not trust Intel at all, since the big problem is the ring bus design.
I'm holding on to my 12700K for a while. I still feel like it has another 3 to 5 generations of good use. But it is interesting to see what improvements Intel and AMD are making to their products.
Yeah I don't see a reason to upgrade my 13600K either yet. I am interested if the Ryzen 9 X3D chips turn out well tho, especially if there is no more core parking nonsense.
i was running a 9900k right up until a month or so ago and until 2023/24 id have hardly known my CPU was 7/8 years old. you'll be good until 2026 at least.
I'm gonna wait. I just spent weeks looking at your videos and learning how to build a mid tower PC. Found out about Micro Center because of you, and also learned they're only 10 minutes from where I live. Ended up getting an i7-14700K/motherboard/48GB RAM bundle for $510. Then got an RTX 3080 new on eBay for $445. This will definitely hold me over until the new CPUs and RTX 5090 come out. Gives them time to iron out the issues before I commit to the monster PC build I really wanna make.
Surprised you didn't mention what's replacing hyperthreading. Not sure how they're going to market that, since showing no hyperthreading is going to turn some people off if they don't know it's being replaced by a newer method.
At least Intel is supposedly hitting the same performance at half the power, their main improvement being efficiency. I can't say the same for AM5 😅 seems like the same old AM4 on a newer node
@@slimjimjimslim5923 Is that a joke? Comparing AM4 chips to AM5 and saying there's no difference is crazy. You might mean Zen, and even then, Zen didn't need a 50% power cut; it was already operating at around half the TDP of Intel, and that gap is still there, Zen is still more efficient. The Ultra 5 245KF has basically the same TDP as the 9900X (at about 120W) with a max socket power of about 160W. That's a huge difference in performance for the same power draw.
I thought the socket was the same, according to GN. What changes is the standard for the amount of pressure needed to seat CPU on the board, which most good boards should meet anyway.
more disappointment... and they couldn't even bump up the L3 cache huh? Is it really so much to want something like a 12 or 16 physical core CPU with big cache, without having to rely on stupid core parking? Gimme like 16 P-cores from Intel, with the X3D cache from AMD, on a single chip (no CCD parking nonsense) and no hyper threading.
@@Valthalin We'll see, the way I read it is that there will be 2 and/or 3 CCD chips, they're putting X3D on at least two CCDs. Until the performance is actually tested and latency is tested, I'll be holding my breath. From a pure single-thread compute standpoint IIRC Intel is still ahead by 15% IPC, and in many many games, especially the ones I'm concerned about, the limit is the render thread. in VR you have realistically two total render threads at best, and frame pacing is more important than frame rate, and any judder causes absolute mayhem. in non-VR you can run many games at like 300fps no problem, then throw decent levels of graphics, and all of a sudden 72 fps becomes "impossible".
Sooo I'm still sitting on a 3570K, is it a good time to upgrade yet? 7800X3D, or wait for the 9800X3D? The benchmarks I've seen of Intel are running way too hot for my taste. I'm used to 50-60°C on my 3570K @ 4.4GHz / 83W.
I turned HT off on my 14600KF and was able to turn it up an extra 200MHz on the P-cores. I run it now at 5.8GHz P-core and 4.4GHz E-core. Got a 2244 single-core score in Cinebench R23, and a 922 score in the CPU-Z bench. It runs stable in the Prime95 and Cinebench stability tests.
IF I could upgrade before the end of the year...? I'd get a 7800X3D; I only play WoW so even my 5800X is probably already overkill, but I want to play 4K, high FPS - I still can't find a definitive answer to why I need to pair either CPU with a 4090/5090 to achieve it, so I'm holding fire 'til I do see why...
No reason to go crazy if it is just WoW you are playing. The top end tech is entirely overkill for the game. Running anything at 120fps on a 4k monitor will demand good hardware on the CPU and GPU side if that is what you want to achieve. I'd stick with your current cpu and try something around a 4070 ti super at 1440p, maybe look for a test video up on youtube and check out the graphics and the card(s) needed to get the performance you want.
@@Valthalin I can already do/get 1440p @ 100+fps (RX6800) & have had that for a good few years - I just feel it's time, for me, to want to play @ 4K with the bonus that other stuff I do/watch is available in 4K, as well.
Numbers are looking mid. No upgrade path with Arrow Lake; throwing down $1K for little to no performance gain is crazy. Next-gen X3D chips are going to destroy it.
@@tim3172 I guess you missed the part where Intel was asked about socket longevity and dodged the question. There are literally several rumors about Arrow Lake+. Both Hardware Unboxed and Gamers Nexus asked the question and got cold-shouldered, and there are tons of rumors about Intel canceling their ++ refresh and jumping to Lava Lake in 2026, which is a whole new socket. It's called Google, go look yourself; there are plenty of YouTubers talking about it.
Yup. E-cores were never much more than marketing. It's how Intel kept up with AMD's core counts. Of course plenty of dimwits buy based on numbers on paper, so it worked for Intel.
You know why that is? Power consumption. Increasing the P-cores and decreasing the E-cores might give you a minuscule improvement in gaming, but the power draw, especially in all-core workloads, increases. Personally, I think 8 P-cores are fine for now; very few games (maybe you could name one?) profit from more than 8 P-cores.
They need to bring back those old cases with the turbo button, except now they toggle 65/105W power.
its built into the cpu.
@@BlueRice but then I can't press a zoom zoom button so my brain makes a happy
@@BlueRice Tell me you didn't use a computer in the 80s/90s without telling me you didn't use a computer in the 80s and 90s 😂
Man, flashback from the Intel Pentium days. Case had a turbo button. 166MHz with the turbo on, 133 otherwise.
@@Gren83 Right, the button made the CPU slower. These kids today.
The person who invented that new naming scheme has probably previously worked for a monitor manufacturer.
All hail Iiyama!
🤣
marketing team is doing fentanyl version 2.0
Laptop manufacturer:
"Hold my 16IRX8-82WK00GEMX"
The people who compare that new naming scheme to monitor naming schemes have never looked at monitor naming schemes.
Everybody: listening to Jay
Me: Obsessively watching the PC behind him to see which fans turn on and off.
OCD by any chance??? 😄
Same here 😂😂
I always set my GPU fan speed in MSI Afterburner to the minimum at which they won't turn off; the noise when they turn on and off is so distracting, and it could probably shorten the lifespan of the fans? I don't know.
Same
lol now i’m watching them!
4:21 Some caveats to this, because in some cases you actually get a performance increase when disabling hyperthreading because the core is not split between running two tasks
I wonder how it'll compare to 14th gen with HT disabled
TBH power draw was kinda getting out of hand, and still is with GPUs. It's nice to see it getting better; we shouldn't need a dedicated circuit to run our computers in our houses.
We've even gone beyond Bulldozer/Piledriver. Crazy.
Yeah, if you're running an Intel CPU at 350 watts. AMD has had the 80-watt 7800X3D for a long time; looks like you only buy Intel, so you might need a power plant in the house.
UNLIMITED POWER
@@parm2-x7h i have a 5800x, I was speaking about the industry in general not my own setup ;)
Intel ditched HT due to 3 reasons.
1) Sidechannel attacks.
2) HT requires doubling the register file among other things - this saves die space.
3) Intel was planning cores that can be partitioned in-situ as part of the Royal Core architecture. Ripping out HT is a first step towards that. It's unclear whether Royal Core will ever come to market as planned though, as the project wasn't completed before Jim Keller left.
And probably they'll do a "reduced x86 instruction set" as the main instruction set to compete with ARM
@@BozesanVlad X86S lies somewhere in the future. Probably well past 2034. Also, the instruction set size of x86 isn't really that much of a problem.
Modern instructions hit a dedicated fast path and the old stuff remains supported through microcode mappings, but isn't emitted often by compilers anymore anyways.
CISC applications tend to be smaller and require less I$ and I$ bandwidth btw., because loads and stores are most often an implicit part of other instructions instead of having to be explicitly added by the compiler.
At the same time, loads/stores can still be optimized, i.e. performed out of order and/or batched, once the instruction has been broken down into microOPs inside the CPU.
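As a toy illustration of that last point (my own sketch, not anything Intel-specific): an instruction with a memory source operand carries an implicit load, and the decoder can crack it into separate micro-ops that the out-of-order machinery then schedules independently.

```python
def crack(instr):
    """Toy decoder: split an instruction with a memory source operand into
    a separate load micro-op plus a register-only ALU micro-op."""
    op, dst, src = instr
    uops = []
    if src.startswith("[") and src.endswith("]"):
        uops.append(("load", "tmp0", src))  # the implicit load becomes its own uop
        src = "tmp0"
    uops.append((op, dst, src))
    return uops

# One x86-style instruction, "add rax, [rbx]", becomes two micro-ops:
print(crack(("add", "rax", "[rbx]")))
# [('load', 'tmp0', '[rbx]'), ('add', 'rax', 'tmp0')]
```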
@@Psychx_ In short, they made their GPUs as FPGAs, and will do their CPUs the same way.
Not in 2034, but a few years ago (for the GPUs). It isn't even something they hid; it's in their drivers and on the Intel site.
I like the direction they are going with the efficiency, not only because it saves power, but because it's a passive way of battling heat. Which in a way directly translates to less throttling and more stability and durability. If we have learned anything from the two previous Intel gens, going all-in on power for the sake of performance is not that wise.
Lol, so if you are a gamer, you are paying a lot more for the new Intel CPUs while getting less performance? What a joke. The 9800X3D is the only way to go. I couldn't care less about productivity benchmarks.
Hmm.. I like durability as an argument for a new CPU generation. Wish my Intel Q6600 from 2007 that i still use to play old games had that!.. Are you for real?
@@RawBejkon I am for real, in fact I wish GPUs would adopt the same direction. Power draw is getting insane. a 1500 watt PSU? No thanks.
Yes and no. Yes, Intel's previous generations of CPUs were using so much power and producing so much heat that it became a problem, so they needed to lower the power consumption. But at the same time no, it's a fail, because they had a node advantage over AMD and this is the best they can come up with? Heck, they probably would have had similar results if they made 14th Gen CPUs on the new node to reduce power consumption while maintaining the same performance. Efficiency is great if there is a performance increase. This is the same problem with AMD, going for efficiency with little to no gains on Zen 5. That's why people complained and AMD released a 105W option.
The problem with Intel is that it is using a newer node, had the advantage and still couldn't win. In fact, it is still going to consume more power than AMD counterparts. Not to mention X3D chips will retain the lead, and 9800X3D will pull further ahead when it comes out. Intel needed efficiency AND performance improvement in order to compete. Unfortunately it couldn't achieve both.
insane excuses...
13:20 - "But we can't 'get' anywhere in the future, if we don't 'start' somewhere. So if we don't start 'here', then we'll never get to 'there'." - Jay 2024
Just hope the new X3Ds don't disappoint.
X3D never disappoints.
As long as you don't expect a huge uplift, it'll be fine.
@@cizzymac Seems there is a large internal change to the L3 cache in the 9000 series; we'll soon see what AMD is cooking there 😉
Depends if Microsoft doesn't break the task scheduler again to suit Intel CPUs like they did with Intel's E-core bollocks
@cizzymac ? How ? Zen 4 to zen 4 3d is 20% faster in gaming while having 10% lower clocks
Thank you for pre-mashing my keyboard for me. Lower power consumption helps reduce the ridiculous cooler sizes, and helps with noise and form factor, i.e. it helps make ITX viable because it makes low-profile coolers more viable. I like these developments from both Intel and AMD
Small form factor is fun, and to really make something small, the cooler is never optimal. Power supplies are also lower capacity. In that world that often involves undervolting and heat management, these new improvements are huge.
not to mention portability and how much it will help battery life on laptops!! Before, they had to have their own "M" series chips, but some laptops still use desktop processors and suffer greatly from the power draw in battery life and heat
Exactly this. I'm wanting to move over to ITX at some stage, purely because what's the point of large towers anymore? We no longer have SLI, we have NVMe drives, and I hate how much desk space my PC takes up 🤣
Yea, in the SFF world this could equate to running a CPU and GPU on one 280 mm radiator instead of requiring separate ones.
I have a 8700k system that's about 6 years old at this point. Which still works just fine for productivity stuff. I've been really interested to see how Arrow Lake stacks up, but maybe I'll wait for the second generation of these chips.
Same CPU. I was thinking of upgrading due to the new FLIGHT SIM releasing, and a new CPU would pair nicely with my 3080. Might just overclock it slightly and wait for the next gen
my 8700k is a crap hunk of shit for playing space marine 2 i want upgrade
Same CPU with a 1080ti but I'm upgrading this time around. Flightsim is my main game and I need this upgrade quite bad lol.
@@simboodamnI’m also upgrading this time. What are you planning on buying?
@@mrskizzlewizzle I might be judged for this, but I'm going for the Intel Core Ultra 265K, 64GB of RAM, a few Gen 4 SSDs of 2TB each, and an RTX 4080 Super. I can live with the risk of the new platform.
The problem I have is that they cancelled the refresh which would be released next, so this is a dead platform, along with a dead socket
Why were the fans spinning on the GPU behind Jay? 1:37
Pc is turned on (possibly rendering something) and it's probably configured to have fans completely off until a certain temp threshold
I want to point out on the multi-threading graph at 6:30 that they mention this was tested at 125W approximately. So the 14900k or 9950X may perform better at higher wattages. I am happy they are improving the performance per watt, but I would be surprised if the Ultra 9 285k took the multi-threading crown.
The fact that the z790 is obsolete with the arrow lake cpus is crazy
dirty deeds
Yeah but given it’s Intel they don’t really have a history of long socket support
@@sjargo11 one of the many reasons to go with AMD. This will be obsolete in 2 years lol.
I doubt that the NPU will do much. The limiting factor is memory bandwidth, similar to how graphics are always better on a dedicated gpu with high-bandwidth gddr6. What the NPU could do is reduce the power profile of low intensity neural processing, i.e. increase efficiency. But because neural processing isn't already power limited, it won't be much.
I don't know hardly anything about NPUs. But, its only 13 TOPS and I do know that isn't much.
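A rough roofline-style sketch of the bandwidth argument above (the numbers are illustrative assumptions, not measured specs for any particular NPU): attainable throughput is the minimum of peak compute and memory bandwidth times arithmetic intensity, so a low-intensity workload fed from shared DDR5 can sit far below the quoted 13 TOPS peak.

```python
def attainable_tops(peak_tops: float, bandwidth_gbs: float, ops_per_byte: float) -> float:
    """Simple roofline model: throughput is capped either by peak compute or
    by how many operations the memory system can feed per byte of traffic."""
    bandwidth_limited = bandwidth_gbs * ops_per_byte / 1000.0  # GOPS -> TOPS
    return min(peak_tops, bandwidth_limited)

# Illustrative numbers only: 13 TOPS peak (as quoted above), ~90 GB/s of shared
# DDR5 bandwidth, and a memory-bound layer doing ~4 ops per byte of traffic.
print(attainable_tops(peak_tops=13.0, bandwidth_gbs=90.0, ops_per_byte=4.0))  # -> 0.36
```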
Should've ditched E-cores, or at least made an 8 or 10 P-core-only gamer edition, maybe with some extra cache?
If they were smart, they would have a chip like that and market it like AMD does their X3D chips.
I can see e-cores on mobile chips. On desktop, no
@HyperionZero Cores can always clock down to meet lower power requirements. But they could still make another SKU that's just for desktop high performance needs.
@HyperionZero Having a model with only P-cores for those who want that wouldn't hurt those who like E-cores; just don't buy that model
@HyperionZero Of course CPUs are not only for gamers. However, it might be a good idea to develop more specialized CPUs for different tasks beside an allrounder CPU.
13:53, What happens to performance when you go above 1080p? Most people are now going to 1440p because it doesn't cost any more than a 1080p monitor now. I'm guessing the relative performance is much worse and that is why they only show 1080p.
the only way any market share will shift is IF INTEL DITCHES THE DUMBASSERY THAT IS CHANGING SOCKET EVERY 2-3 GENERATIONS SO WE ARE FORCED TO BUY NEW MOTHERBOARDS AS WELL. My 12900KS is the last Intel CPU I will be using, buying, or even thinking of purchasing in the future.
Very few people care about that
@nostrum6410 quite a few people care about it, educate yourself.
@@Gimpy17 you still have 12th gen, so you don't seem to care either
If you really have a 12900KS then you do not need to upgrade your PC in about a decade. CPU's last a very, very long time. Most people complaining about that fact either do not have a computer themselves or are extremely wealthy and stupid with their money.
I get what you're saying, but how many people bought a CPU with each release of AM4? If they did 2-3, then what's the difference?
Happy with my 13900K and Z690. I think it will serve me well for 4 or 5 more years. However, I will definitely jump from 4090 to 5090 when it comes out.
AC DC "dirty deeds done dirt cheap",a classic song.
Gaming comparison vs. the 9950X (6:48)?
Content creation / Desktop performance comparison vs. the 7950X3D (7:12)?
It's clear Intel has little to offer this time.
On the other hand: This seems to be Intel's own Zen1-moment and we can look forward to what the future will bring.
The issue with 13th/14th gen, the brand new socket needed with a potentially short lifetime as Intel announced skipping a gen to focus on a completely different SKU, and no real gen-to-gen gain? I'm currently slowly saving to get a desktop in 6 months as I'm finally settling somewhere, and I'm definitely leaning AM5
Thanks for the video :)
nice to see intel has come around to gluing their cpu's as well.
If its like Zen.
Gen 2 will be awesome! Skip the growing pains. Wait a year.
I like to see both AMD and Intel going the route of forgoing some performance for efficiency. We're at a point of diminishing returns with frame rates on modern hardware especially with DLSS, FSR, and XeSS. I mean 75fps in an RPG is pretty damn good and smooth and so many competitive shooters are so well optimized old hardware can get 144+fps on them. The returns for efficiency are much higher than the playable experience in my opinion. Pair that with the 5090 claiming to draw up to 600W, it makes perfect sense. A computer drawing 900W between the CPU and GPU is just insane and completely unnecessary when you consider the point of diminishing returns.
I did a 7800X3D build in August, I'm good for a few years
so lucky lol it's out of stock everywhere where I live, and the few stores that have it, are overpricing it
Said this when the whole 13th/14th gen debacle broke. We're finally starting to see the limits of silicon as a processor medium IMO.
more a limit of a certain methodology of processors in the x86 architecture. chiplet design will bring benefits that the previous methods didn't allow but this iteration is going to make it seem like Intel were taking steps backward too
I remember this being bleated out for what? 4 years during Pentium 4?
5+ years of Core i?
3d cache is coming to Intel right? That's going to open the floodgates for gaming performance. Zen5 has some latency issue in their design.
@@tim3172 Exactly. And at some point it will become a reality. You cannot deny clock speeds have practically stalled and there's only so many ways around that.
@@tim3172 I think the argument holds more weight this go-around, as it isn't just CPUs; GPU generations have significantly increased power consumption to maintain their generational improvements
The thing about Hyperthreading/Simultaneous Multithreading, the standard 2 thread CPU core has at least 4 pipelines (4 processing pipes). One pipeline is the primary thread. One pipeline is a secondary thread. Two pipelines are hardware acceleration ASIC threads that are loaded operations from the CPU firmware, for example the code for SSE instructions run on these extra pipelines. All of these pipelines share the ALUs and FPU(s) inside the core. In order to run 4 of these threads simultaneously, there needs to be enough execution units in the core to serve these pipelines. Ripping out hyperthreading allows the CPU developer to shrink/simplify the core to just 1 threaded pipeline and at least 1 hardware acceleration pipeline. This reduces the number of ALUs in each core, as you have fewer "customers per core" to serve.
The next factor to drive Intel to drop Hyperthreading, or at least kill their simultaneous multithreading method, is all the instruction execution security flaws and potentially looking at caching and memory access improvements.
If there are cons to Hyperthreading/SMT, why did CPU engineers incorporate it into the CPU design? The purpose of multiple threads simultaneously running per core is about performance per square millimeter of silicon. Often on a CPU, a single instruction or operation can only utilize 1 execution unit in the CPU in a clock cycle, leaving the rest of that silicon unused. Each CPU core has more than just ALUs and FPUs because there are other non-logic operations to call. Because each core can only execute 1 instruction at a time, there are unused resources. Thus the purpose of SMT/HT is merging 2 CPU cores to share underused resources and execution cores, reducing the landscape of having 2 single thread cores into 1 2-thread core.
The real problem of SMT/HT is something more specific to the x86 series of CPUs, backwards compatibility. In order to maintain backwards compatibility so the next generation of CPUs can run last year's operating systems and games, the same core instruction set and syntax must be maintained. The original 8086 CPU was designed to directly access RAM and didn't consider RISC pipeline staging or multithreading. The modern CPU has to translate the old system design onto newer, more sophisticated structures, and that translation step is overhead.
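To make the "unused execution slots" idea concrete, here's a deliberately crude toy simulation (my own sketch with made-up parameters; real front-ends and schedulers are far more complex): each thread issues at most a couple of ops per cycle unless it is stalled on memory, and a second SMT thread soaks up slots the first one leaves idle, so the core's throughput goes up without doubling the hardware.

```python
import random

def ops_per_cycle(threads, issue_width=3, cycles=100_000,
                  stall_prob=0.10, stall_len=12, seed=1):
    """Toy SMT model: a core has `issue_width` issue slots per cycle. Each
    thread issues at most 2 ops per cycle (a single thread rarely fills the
    whole core) and occasionally stalls, e.g. on a cache miss."""
    rng = random.Random(seed)
    stalled_until = [0] * threads
    total = 0
    for cyc in range(cycles):
        free_slots = issue_width
        for t in range(threads):
            if cyc < stalled_until[t]:
                continue                      # thread is waiting on memory
            if rng.random() < stall_prob:
                stalled_until[t] = cyc + stall_len
                continue                      # stall starts this cycle
            use = min(free_slots, 2)
            total += use
            free_slots -= use
    return total / cycles

print("1 thread :", ops_per_cycle(1))  # many issue slots sit idle during stalls
print("2 threads:", ops_per_cycle(2))  # SMT fills some of those idle slots
```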
I'm running a 10700k with a 3080ti - after all of the headache I'm seeing with the 13/14gen Intel processors, I'm heavily being pushed back into the AMD camp. The 7800x3D seems like a very reasonable upgrade. (Trying to keep things on the somewhat cheaper side while still getting the best performance I can)
If you're waiting for a possibly cheaper option, I think the 9600X3D could be a potential cheaper alternative to the 7800X3D, assuming that when it comes out it's available to the general audience
The new core series is a new architecture and does not suffer from the problems of 13/14
Intel changes sockets too much. Not upgrading any time soon for sure.
It's a pain. I'm waiting for 9950x3d. I put a new socket in for 7900x. Hopefully it is OK.
It seems both Intel and AMD have reached a point where performance can't go much higher, so they're competing to see who can come up with the worse naming scheme. Intel seemed to be in the lead with the whole Core Ultra thing, but AMD is actually pulling ahead because they went with a 300 series for no reason other than it's a bigger number than Intel's 200 series
The biggest concern I have with the new CPU's is the inclusion of the NPU's. When I bought my new laptop I purposely stayed away from any of the units with the NPU's due to Microsoft's ridiculous implementation of Recall and the HUGE security concerns that come with it and the fact that you cannot turn off Recall.
Yep, the NPU will be used to profile users.
Recall is off by default tfym you can't turn it off?
I'm curious to see what the performance of Intel's new cpu's will be like. But I'll still end sticking with a 9800X3D, I mainly game and watch YT so an X3D chip is the best choice.
Sigh. Jay I'm right with you there. I went with the AC/DC song. We've gotten old haven't we? I hope you're doing well.
Lol
Intel: "yeah but they just glued cores together".....
Intel: We glued cores together.
Intel: We glued tiles together. Tiles is new sexy thing!
Tiles have a greater density of interconnects so they have a tiny slice of bragging rights - but as you suggest, certainly not enough to excuse their past criticism
Intel: We glued Chiplets together and called them Tiles. Now we gonna' glue Tiles together. [Or something.]
I hate this way of creating multi core systems. It’s just bad. I don’t care if it’s easier to manufacture, it’s just bad
AMD: We created FX! It's 1 core per thread.
Intel: Haha! We can do multithreading and we're faster!
AMD: We have Zen and it's multithreading capable with 2 threads per core.
Intel: We took FX and cloned it...
AMD: Um... 😳 WTF!?!
can I use windows 10 with new 15th gen?
I need more power reduction for more itx cases.
I'm willing to upgrade, and my 5th gen Intel CPU has come to an end. But I need to see some performance tests before I decide which one to choose.
Since RGB gives you at least 30% performance boost I wonder if they will now make AI controlled RGB :D
dont give them ideas
If the performance per watt is as they say, I think that's a very positive step for Intel. It could be a nice upgrade for people on 13th or 14th gen, and a GREAT upgrade for people with far older CPUs.
You pay like $200 extra for like a 2 fps difference; it's like paying $100 for a 1 fps average difference if you have a year-old CPU. These CPUs are not it, IMO, but since I've got an i7-6700 that I bought back in 2015, it's about time for me to upgrade lol. I'll just go for the cheap midrange one and not pay $200-300 extra for a 1-2 fps difference lol
I'm still happy with the performance of my AMD Ryzen 9 5950X CPU.
Banger CPU
Same even so I would only go to 7800x3d if I changed
Little reason not to be, honestly.
I managed to get mine working on x370 mobo. I'll keep this system forever
I am beginning to think that unless some major breakthrough occurs, we are reaching the limits of current technology. The days of large improvement jumps are behind us.
How does it compare to the 14900KS though? I'm not sure if I wanna switch if the only difference is maybe 5 to 10% performance and a bit cooler/less power (cooling and power aren't really an issue for me; the hottest my CPU gets for the type of games I play is 70 to 80F, hell, in some of the games I play it doesn't go above 58F). Thinking of waiting till the next generation after this.
I just changed my 13900k with a new 14900ks, I will wait another year to change the whole platform.
I mean, when you wanted the best of the best before HEDT died with the 980X, 3960X, 4960X, 5960X etc., you were usually paying around the 1K mark. Sure, they were better equipped as well when it comes to PCI-E lanes and triple or quad-channel memory, but they didn't have nearly as many physical cores as we have now.
I believe they came from the same production lines as the Xeons though, so some of the cost is explainable, but I think the current pricing isn't exactly super terrible.
Don't forget that there's inflation and raised taxes on luxury products in some countries; the i7 950 also roughly cost about 260 to 280 euros back in 2010. Fortunately motherboards weren't crazily priced even though they were very well equipped, with a rich feature set we only see back on expensive Threadripper/Xeon platforms now. I miss the old motherboard times.
Waiting and still running an i5 750. Definitely interested still
I’m cautiously excited for the new Intel stuff. I’ve been an AMD guy for a long time, but this definitely smells like their Zen moment. As much as I like AMD, I also like competition so neither company gets complacent.
At release, Zen 1 had a lot of problems with RAM, etc. though. I wouldn't recommend being an early adopter of this new Intel Zen moment
...done dirt cheap.
The song is a classic Jay. It doesn't date you
I'm gonna wait as only recently moved to ryzen 7700 cpu so no upgrades for next few years
I got that same 7700 and it’s a beast for rendering video and gaming.
I'm on a 6600K and all I'm hearing is: let someone else test this one out, come back in 2 generations. If I buy a new CPU it's still more worthwhile to buy an AMD CPU or a 12th gen Intel and wait another 10 years before going down this route. (I'd still use the 6600K, but Win 11 is kicking me off my PC.)
Thanks for presenting your thoughts 🙏
Keeping my 7800X3D for a long time. :)
Shit, me too 😂 I undervolted -24 and it runs faster, idk why. Also, idk if it's just AM5, but the boot-up times are hella slow; I got mine from 45 sec down to 14.5 sec
@@Luckytmj97 Mine is fully stock, but the temps are more than fine for me since I'm using a Noctua NH-D15. :)
As for those super long boot times on AM5, it's because your system does RAM training every time you power it on. There is an option in the BIOS to turn that off and keep the latest stable numbers.
That's what I did & my boot times decreased dramatically, too.
@@Yuu_Touko I thought everyone just put their pc to sleep? you guys doing full boot every time?
@@PaulXPZ I never use sleep mode. In fact, I very rarely turn my PC off at all! Sometimes it can run weeks on end, and I just turn off my monitor when I go to sleep. But yes, when I need to shut it down, I do a full shut down. :)
@@Yuu_Touko My record is over 400 days of power on time on Task manager 😂
11:30 whats going on with the computer in the background. the fans are being WEIRD
I went from Intel to AMD when transitioning from a 9900K to newer hardware a few years ago, when the 5950X was relatively new. I didn't regret it then, and seeing what's happened with 13th & 14th Gen, and now how disappointing it looks like Arrow Lake will be, I'm quite happy to keep my 7800X3D for a while longer, although I may end up getting a 9950X3D when it comes out, as I'm doing workloads now that can actually use extra cores if I get them.
If you are actually doing work then you might want the current CPUs from AMD. Yes, the X3Ds are the best for games, but everywhere else they fall pretty short.
@@RealSugam I use the system for both, which is why a 9950X3D is the plan when they come out. Looking forward to it even though the 9000-series is kinda ass, I like the power efficiency since my system is already a power-sink regardless with all the crap I have in it (literally draws 150W from the wall at idle because of fans etc).
Living with a 6800K that has started to show signs of being on the way out, this is interesting for me. Whether I switch to AMD or stick with Intel is on the line with this one. Great video Jay & team!
I love how they added a geomean to this launch. It just makes me throw everything claimed in the trash. Why invent new benchmark presentations just to help manipulate the meaning and the numbers so harshly? Just stop with the bullshit already, Intel.
Like Jay said at the end, the smart will wait for the X3Ds or extended reviews, at least 6-12 months, before jumping in. Sadly people still d-ride for Intel, just madness.
I do like their placement of the breathing hole; it's not sitting on top where it can get clogged up by a nice spread of thermal paste.
I would say power reduction in Intels case is a good thing considering 14900k uses the power of a small sun
in the palm of your hand
Good insight. Looking forward to the real benchmarks. I watched an Intel video that went into how excited they were about the overclocking potential.
And this Arrow Lake, what lithography it is, 10nm still?
I just went AMD, I'm waiting for them to become villains before I turn again
They did that with the 7000 X3D launch. They got tens of thousands of gamers to jump on the almost-double-the-cost 7950X3D by making sure it was released a month earlier, and they knew the parking/affinity feature wasn't really working for them yet. No company is the "good guy"; it's a matter of degrees of what they can get away with. AMD makes good stuff, but they still pull the tricks whenever they think they can get away with it. They just haven't had as much chance to do it compared to Intel/Nvidia. Luckily for them, every time they do pull something that on its own, in a vacuum, would have people really pissed off at them, Nvidia or Intel come along with something else that takes the heat off.
They totally blew the Zen 5 launch, made enemies of the tech reviewers, and lied about their performance gains. If they aren't already the bad actors, they're certainly trying to milk every last cent from you. And when they can't, they drop the price a week later.
Calm down guys, obvious joke was meant to be obvious, not triggering
Hahaha. Maybe that's already happened when they decided they could follow nVidia into the ultra-high price GPU segment.
@@a5cent they literally announced they're not trying for the high end anymore. Their top of the line is $1000 while Nvidia's is literally double. Come on now.
Question: do we think we might start seeing diminishing returns in performance as new generations release?
Like, not saying we will see zero increases, but smaller bumps per generation?
Yup. We are getting close to the point where transistors cannot be made any smaller due to physics.
There is some promising research into non-silicon-based chips, but no clue if it will pan out.
Photonics are currently much, much too large to be used for anything but machine-to-machine communication.
I'm riding the efficiency train. The same performance for less wattage is a great thing to see. Over the years we have seen the average wattage go up and up. I would really enjoy GPUs getting less power hungry now; 3-slot cooling, seriously, that's hilarious.
Until we get an AI like you see in Star Trek, where you can say "computer, do this" and it replies "as you command, sir",
AI is never going to be a big deal to most people on PC.
It's moved to 4 slot cooling for the 4090.
Does the reduction in power give us more overclocking headroom?
It's gonna be another one-gen socket. At least with AMD's Zen I went from a 1700 to a 2700X to a 3900X to a 5900X and finally the 5800X3D, all on one motherboard.
It is so smart from AMD. The step of upgrading your CPU only is so much easier than having to change your mobo as well.
@@legendtoni1094 Absolutely, makes me wish I'd gone AMD sooner than this year.
@@Enzo187 Yeah, no way Intel would have allowed generational leaps on the same mobo up to something like the 5800X3D. That kind of leap, what AMD did, is unheard of.
Intel has always made 2-gen sockets since Lynnfield/Bloomfield in 2008, excepting LGA 1700, which was 12/13/14.
AMD has released short-lived sockets like TRX40 and sTR4.
@@tim3172 OP sorta misspoke with "1 gen socket"; they're referring to how with Ryzen you can go from a 1700 to a 5700X and not need a new motherboard.
Spot on Jay. Thanks again for the reviews btw.
I'm actually really glad to see a focus on power efficiency. It takes concern away from the power budget for a power supply and graphics card and especially cooling methods. On one hand, I wouldn't be surprised if they kept on their path of more performance and more power draw, but there's no way that would've been sustainable, so on the other hand, I can completely understand how this makes sense and hopefully be their best step moving forward.
I'm definitely keeping my 14900KF for a while, until I see some huge improvements in performance and power.
I have an Intel Core i7 4790K with a GTX 1070. I was thinking about getting the Core Ultra 9 285K, but I will wait for AMD to see if the Ryzen 7 9800X3D beats out the 285K in gaming. What's funny is that the 7800X3D is still the king so far.
For gaming only, price-to-performance per dollar matters; for content creation you can buy anything, since you're making the money back 😅
The 7800X3D sure beats it, so the 9000-series X3D will even more.
Fps will not matter; it is only a 5 to 10% difference.
A 7800X3D... buying it for $600 is not worth it.
For productivity, Intel is still the king.
Wait for gaming benchmarks before you upgrade the CPU. This is just my opinion, so take it or leave it, but if I were you I would upgrade to whatever gaming CPU is at the top once Intel's newest chip is reviewed. I would then buy a used 3090 or 4090 (find a good deal) once the 50-series is released, as they will populate the market when the enthusiasts buy the 50-series. I would do this so your CPU and GPU upgrade paths are staggered: the new CPU should easily last 5-6 years, and the GPU gets upgraded in 2-3 years, at which point you go back to buying the newest GPU generation. If you have the cash, do it all at once every time, but I find this works great for me and for budgeting my spending. I also have three sons that game, so I am buying and building their 3 gaming PCs' parts like this as well.
@GangstalkingClips Just buy a good GPU like a 4090 and it is much better, if you already have a 3090.
Graphics matter if you don't have a decent GPU.
you aged us all with the ac/dc reference, Jay.🤘😠 🤘
Jay was wrong, Zen 1 wasn't a chiplet design, and they aren't on their 5th generation of chiplet design. Zen 1 was monolithic. Chiplet design came with Zen 2.
That doesn't matter for Jay's target audience. If this is a problem for you, then HU and GN is what you should watch instead.
@@terribleatgames-rippedoff I follow those guys too.
@@terribleatgames-rippedoff I should watch the ignoramuses who whine when a $60 motherboard throttles with an $800 CPU running an artificial power virus in an artificial zero-airflow VRM situation?
Or I should watch the guy who calls anything that isn't the peak GaMeR value (while spouting off about dungeons and dragons) a "waste of sand"?
The 3600/5600? Perfect.
The 3700X? Trash.
3900X? Trash (because GaMiNG and multiple dies)
3950X? Trash.
Ditto all of the 5000 series parts.
He's doubly wrong. Not only was Zen 1/+ a single chip, it was not largely impacted by the CCD "problem" he referenced. Zen became competitive with Intel by getting better clock speeds and better memory performance (alongside bigger caches), but even after that happened, the 3300X, which had no CCD communication, couldn't match the performance of a 3600X, and both performed on the level of a 7700K in 4-core workloads.
@@s1mph0ny Zen 1/+/2 had CCX scheduling issues, as an 8-core chip was made of 2 CCXs of 4 cores.
So should I disable hyperthreading on an older-gen Intel, let's say 11th or 12th gen?
I'm so glad I bought 7800X3D back in July for $380.
Both AMD and Intel did not disappoint to disappoint
Yes for sure! I might upgrade to 9800X3D but we will see, I am currently using the 5800X3D so ;) Might be perfect to actually just skip 2 gen and go for the Zen 6!
I'm in the process of a new build from my i5 9400. Looking at the last few gens, this information, and Intel's clear pivot to workloads, efficiency, and AI, I can't think of any reason I'd build another Intel rig rather than switching when the 9800X3D comes out, aside from possibly building a cheaper 12th-gen Intel setup and seeing what comes up in a couple of years if the 9800X3D is somehow a disaster.
So as someone who predominantly games, AMD just makes more sense to me right now.
Intel needs to do more to dig out of the hole they made for themselves with the previous gen's disaster of inadvertently killing their own CPUs.
They're offering free RMAs and an extended 5-year warranty. What else do you want them to do?
@jamescarter8311 I got an RMA done. But here is the kicker: for a 13900KF, they gave me a 14700KF and told me that's the best they can do. So I lose performance and money.
@@ryandraj🤣
@@jamescarter8311 That doesn't help if you have to replace the CPU every year and what happens after 5 years? Intel should have done a recall, something is fundamentally wrong with 13th and 14th gen
@@pixels_per_inch Literally not how anything works...
Something is fundamentally wrong with your understanding of the entire situation.
Hi Jay, hope your health is improving and your family is doing well. I always love the content you put out here. Really informative and unbiased. By far one of my most loved tech channels. Looking forward to more. And thanks.
I expect it to be slightly better than the 9950x but when the 9800x3d releases it will get crushed
I wouldn't count on it. AMD has a great history of underdelivering.
@@jamescarter8311 it's true that AMD has a history of underperforming, but so does Intel, so it's a pointless assertion. The crown goes back and forth between AMD and Intel, and has done so for the last 25 years.
I had a 13900K that is now in heaven and switched to a 7950X. After the Win11 fixes my actual gaming perf is the same or close; those few extra fps Intel had came with obligatory degradation. I do not trust Intel at all, since the big problem is the ring bus design.
Personally, I'm thrilled with my current 7950x3d and plan on staying here for a while. My husband's 3900x, meanwhile, could use an upgrade....
@@BronzeDragon133 Same, I don't see myself upgrading for at least a 5 - 8 years. This thing is a beast.
Anyone else reminded of the old 386/486 chips? 30-plus years on and we're cycling back to old naming habits.
I'm holding on to my 12700K for a while. I still feel like it has another 3 to 5 generations of good use. But it is interesting to see what improvements Intel and AMD are making to their products.
Yeah I don't see a reason to upgrade my 13600K either yet. I am interested if the Ryzen 9 X3D chips turn out well tho, especially if there is no more core parking nonsense.
I just upgraded from an i7 4790K to an i5 14600K for $200; I'm holding on for another 10 years.
Upgrade to 14700K and your system will run so much faster. I don't recommend 14900K though as it's too hot and power hungry.
I was running a 9900K right up until a month or so ago, and until 2023/24 I'd have hardly known how old my CPU was. You'll be good until 2026 at least.
@@tyra1870 4790K gang, rise up! I'm satisfied with what I'm getting, but I know a newer system would blow my mind.
Sorry Jay. 8:30 my brain can't get away from the GPU fans stopping behind you. 😂
Great video and thanks for the initial breakdown
I'm gonna wait. I just spent weeks looking at your videos and learning how to build a mid-tower PC. Found out about Micro Center because of you, and also learned they're only 10 minutes from where I live. Ended up getting an i7-14700K/motherboard/48GB RAM bundle for $510. Then got an RTX 3080 new on eBay for $445.
This will definitely hold me off until the new CPU and RTX 5090 come out. Gives them time to iron out the issues before I commit to my monster PC build I wanna really make.
How long is the platform upgradeable ?
Surprised you didn't mention what's replacing hyperthreading. Not sure how they are going to market that, since showing no hyperthreading is going to turn some people off if they don't know it's being replaced by a newer method.
It really is the battle of the Ultra 2,85% vs AM5%
CPU meme fight xD
Thats zen 5%
Not am5%
At least Intel is supposedly hitting the same performance at half the power, their main improvement being efficiency. I can't say the same for AM5 😅 seems like the same old AM4 on a newer node.
-2,85% ***
@@slimjimjimslim5923 Is that a joke? Comparing AM4 chips to AM5 and saying there's no difference is crazy. You might mean Zen, and even then, Zen didn't need a 50% cut in power draw; it was already operating at like half the TDP of Intel, and that gap is still there, Zen is still more efficient. The Ultra 5 245KF has basically the same TDP as the 9900X (about 120W) with a max socket power of about 160W. That's a huge difference in performance for the same power draw.
I thought the socket was the same, according to GN. What changes is the standard for the amount of pressure needed to seat CPU on the board, which most good boards should meet anyway.
DIRTY DEEDS DONE DIRT CHEAP!
Yes, it's a Jojo reference.
kekw
@@alentanor its an ac/dc song. Not a jojo reference. smh
@@Gimpy17 There is a stand in Jojo named after that song. It's the stand of funny valentine, the 23th president of the USA in steel ball run.
@@alentanor cool, still an AC/DC reference
Amazing video, very informative, thank you.
more disappointment... and they couldn't even bump up the L3 cache huh?
Is it really so much to want something like a 12 or 16 physical core CPU with big cache, without having to rely on stupid core parking?
Gimme like 16 P-cores from Intel, with the X3D cache from AMD, on a single chip (no CCD parking nonsense) and no hyper threading.
Many techies are thinking that the upcoming X3D options will include at least one chip designed so we do not need to park cores.
@@Valthalin We'll see, the way I read it is that there will be 2 and/or 3 CCD chips, they're putting X3D on at least two CCDs. Until the performance is actually tested and latency is tested, I'll be holding my breath.
From a pure single-thread compute standpoint, IIRC Intel is still ahead by 15% IPC, and in many, many games, especially the ones I'm concerned about, the limit is the render thread. In VR you realistically have two total render threads at best, frame pacing is more important than frame rate, and any judder causes absolute mayhem. In non-VR you can run many games at like 300fps no problem, then throw in decent levels of graphics, and all of a sudden 72 fps becomes "impossible".
Whilst the K SKU is only a $15 difference, would there be cooling or other benefits to going with the K SKU?
A video from Steve and Jay on the same day about the same topic? We have truly been blessed this day.
Both Steves too!
Sooo, I'm still sitting on a 3570K, is it a good time to upgrade yet? 7800X3D or wait for the 9800X3D? The Intel benchmarks I've seen are running way too hot for my taste. I'm used to 50-60°C on my 3570K @ 4.4GHz / 83W.
I turned HT off on my 14600KF and was able to push an extra 200MHz on the P-cores. I run it now at 5.8GHz P-core and 4.4GHz E-core. Got a 2244 single-core score in Cinebench R23, and a 922 score in the CPU-Z bench. It runs stable in the Prime95 and Cinebench stability tests.
Who cares about a score... most likely your in-game fps went down.
Very game dependent. @@cblack3470
So you got a 4% clock increase for -30% HT performance that's amazing
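For anyone else flipping that BIOS toggle: a quick way to sanity-check that HT/SMT actually turned off is to compare the logical CPU count against the physical core count from the OS. A minimal sketch in Python, assuming the third-party psutil package is installed (the package choice and wording here are just an illustration, not anything from the video):

```python
# Minimal sketch: infer whether HT/SMT is enabled by comparing the
# logical CPU count (hardware threads) to the physical core count.
# Assumes the third-party 'psutil' package is installed: pip install psutil
import psutil

logical = psutil.cpu_count(logical=True)    # threads the OS schedules on
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"Logical CPUs:   {logical}")
print(f"Physical cores: {physical}")

if logical == physical:
    print("HT/SMT appears to be disabled (one thread per core).")
else:
    print("HT/SMT appears to be enabled (more threads than cores).")
```

On a hybrid chip like a 14600KF this still works, since the E-cores never had HT to begin with: with HT off, logical and physical counts should match.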
I will be waiting. Interesting to follow the development of the new stuff though.
IF I could upgrade before the end of the year...?
I'd get a 7800X3D; I only play WoW, so even my 5800X is probably already overkill, but I want to play at 4K, high FPS. I still can't find a definitive answer to why I need to pair either CPU with a 4090/5090 to achieve it, so I'm holding fire 'til I do see why...
WoW and other MMOs love the X3D cache 😉
@@marcinkarpiuk7797 That's good to know; now if only I could get the 4K bit clarified - platform alone isn't enough justification to upgrade.
No reason to go crazy if it is just WoW you are playing. The top end tech is entirely overkill for the game. Running anything at 120fps on a 4k monitor will demand good hardware on the CPU and GPU side if that is what you want to achieve. I'd stick with your current cpu and try something around a 4070 ti super at 1440p, maybe look for a test video up on youtube and check out the graphics and the card(s) needed to get the performance you want.
@@Valthalin I can already do/get 1440p @ 100+fps (RX6800) & have had that for a good few years - I just feel it's time, for me, to want to play @ 4K with the bonus that other stuff I do/watch is available in 4K, as well.
I'm in on this gen; been holding out for a new system for a long time! Need performance info to make final decisions.
Numbers are looking mid. No upgrade path with Arrow Lake, and throwing down $1k for little to no performance gain is crazy. Next-gen X3D chips are going to destroy it.
"No upgrade path"
Source: trust me, bro.
@@tim3172 I guess you missed the part where Intel was asked about socket longevity and dodged the question. There are literally several rumors about Arrow Lake+, both Hardware Unboxed and Gamers Nexus asked the question and got cold-shouldered, and there are tons of rumors about Intel canceling their ++ and jumping to Lava Lake in 2026, which is a whole new socket. It's called Google, go look yourself; there are plenty of youtubers talking about it.
@@tim3172 Don't know Intel's past?
😅thanks for the update, appreciate the info.
Intel: Releases Ultra Core
AMD: Hold my EPYC 9005
Currently still on an i7-7820X and looking to upgrade. Not sure though that these Ultras are worth it..?
Intel still stuck on 8P cores!
Really wish they had higher P-core counts and got rid of the garbage E-cores that do almost nothing for gaming.
Yup. E-cores were never much more than marketing. It's how Intel kept up with AMD's core counts. Of course plenty of dimwits buy based on numbers on paper, so it worked for Intel.
You know why that is? Power consumption. Increasing the P-cores and dropping the E-cores might give you a minuscule improvement in gaming, but the power draw, especially in all-core workloads, goes up. Personally, I think 8 P-cores are fine for now; very few games (maybe you could name one?) profit from more than 8 P-cores.
No game in the world is using more than 8 cores so why would you want more?
@@gizConAsh The obvious answer is because games aren't the only thing people run on computers ;-)
@@a5cent Then why complain about E-cores doing nothing for gaming?
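If you're curious whether a given game actually keeps more than 8 cores busy, you can measure it yourself: sample per-CPU utilization while the game is running and count how many stay loaded. A rough sketch in Python, again assuming psutil is installed (the sample count and "busy" threshold are arbitrary picks, not any standard):

```python
# Rough sketch: sample per-logical-CPU utilization and report how many
# stayed meaningfully busy. Run it while the game is running.
# Assumes the third-party 'psutil' package is installed: pip install psutil
import psutil

SAMPLES = 30           # one-second samples (~30 seconds of gameplay)
BUSY_THRESHOLD = 25.0  # % utilization to count a CPU as "busy" (arbitrary)

busy_counts = None
for _ in range(SAMPLES):
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks ~1s per sample
    if busy_counts is None:
        busy_counts = [0] * len(per_cpu)
    for i, load in enumerate(per_cpu):
        if load >= BUSY_THRESHOLD:
            busy_counts[i] += 1

busy = sum(1 for count in busy_counts if count > SAMPLES // 2)
print(f"Logical CPUs busy more than half the time: {busy} of {len(busy_counts)}")
```

Keep in mind this counts logical CPUs, so with HT enabled two busy threads on the same core show up as two entries; it's a rough eyeball check, not a proper profiler.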
Will this work on a Z690-I motherboard, or does it need a new motherboard to work?