Why is Intel still using E-Cores?
- Published: Feb 7, 2025
- Steve and Tim discuss why Intel has continued to use E-cores when AMD uses only P-cores and outperforms them in gaming
Join us on Patreon: / hardwareunboxed
Buy relevant products from Amazon, Newegg and others below:
Radeon RX 7900 XTX - geni.us/OKTo
Radeon RX 7900 XT - geni.us/iMi32
GeForce RTX 4090 - geni.us/puJry
GeForce RTX 4080 - geni.us/wpg4zl
GeForce RTX 4070 Ti - geni.us/AVijBg
GeForce RTX 3050 - geni.us/fF9YeC
GeForce RTX 3060 - geni.us/MQT2VG
GeForce RTX 3060 Ti - geni.us/yqtTGn3
GeForce RTX 3070 - geni.us/Kfso1
GeForce RTX 3080 - geni.us/7xgj
GeForce RTX 3090 - geni.us/R8gg
Radeon RX 6500 XT - geni.us/dym2r
Radeon RX 6600 - geni.us/cCrY
Radeon RX 6600 XT - geni.us/aPMwG
Radeon RX 6700 XT - geni.us/3b7PJub
Radeon RX 6800 - geni.us/Ps1fpex
Radeon RX 6800 XT - geni.us/yxrJUJm
Radeon RX 6900 XT - geni.us/5baeGU
Disclaimer: Any pricing information shown or mentioned in this video was accurate at the time of video production, and may have since changed
Disclosure: As an Amazon Associate we earn from qualifying purchases. We may also earn a commission on some sales made through other store links
FOLLOW US IN THESE PLACES FOR UPDATES
Twitter - / hardwareunboxed
Facebook - / hardwareunboxed
Instagram - / hardwareunb. .
Music By: / lakeyinspiredg. .
When Alder Lake launched, Intel was really proud of their Thread Director. The idea was that your PC was truly a multitasking machine: you could be working on a render and launch a game, and Thread Director would detect the game launching and move the render to the E-cores so your game could have the P-cores. It works really well. That being said, many aren't using it to its full potential.
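For anyone who wants to approximate that manually rather than trusting Thread Director, here is a minimal sketch in Python using psutil. The core indices are an assumption: an 8P+8E part where the hyperthreaded P-cores are logical CPUs 0-15 and the E-cores are 16-23; the layout varies by SKU, so check your own topology first.

```python
# Sketch: manually demote a background job (e.g. a render) to the E-cores
# so the P-cores stay free for a game. ASSUMES an 8P+8E part where the
# hyperthreaded P-cores are logical CPUs 0-15 and the E-cores are 16-23.
import psutil

E_CORES = list(range(16, 24))  # assumed E-core logical CPU indices

def demote_to_e_cores(pid: int) -> None:
    """Pin a process to the E-cores and drop its priority."""
    proc = psutil.Process(pid)
    proc.cpu_affinity(E_CORES)  # hard affinity: E-cores only
    # Lowest scheduling priority on either OS.
    proc.nice(psutil.IDLE_PRIORITY_CLASS if psutil.WINDOWS else 19)

# demote_to_e_cores(12345)  # hypothetical PID of the render job
```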
Actually, Windows doesn't care and still hogs CPU0.
Linux is
Yeah but there's still the problem of Windows.
@@mustaproductionsperez6726 Seems better on 24H2.
I was burning a single-session CD, gaming, and downloading on a LAN at 100 Mbit over eMule and SMB with a half-emulated RTL8139, ~25 years ago, on a dual Celeron 333 & 433 @ 83 FSB.
If you know how risky that was, you know.
I also honestly blame the fact that the only "multi-core" workload YouTube reviewers ever talk about is video rendering.
Rendering on a CPU is the dumbest performance benchmark ever, since no one in their right mind would use a CPU for that. Everyone and their mom is rendering on GPUs these days, because it is just so much faster and more efficient.
What are some realistic CPU multi-core workloads?
GN does Blender, a zip test and such, but genuinely, how do you benchmark for multi-core? Is there something they could use to test database and server performance or something? Maybe OBS software-encoder performance too, for another use case, I don't know.
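One honest answer is anything embarrassingly parallel on the CPU: compression, compilation, software encoding. A minimal sketch of that kind of test in Python, parallel compression across all logical CPUs; the chunk size and job count are arbitrary choices for illustration only.

```python
# Sketch of a scalable multi-core benchmark: compress many chunks in
# parallel, roughly what a "zip test" measures.
import os
import time
import zlib
from multiprocessing import Pool

CHUNK = os.urandom(4 * 1024 * 1024)  # 4 MiB of hard-to-compress data

def compress(_) -> int:
    return len(zlib.compress(CHUNK, 6))

if __name__ == "__main__":
    jobs = (os.cpu_count() or 8) * 4
    start = time.perf_counter()
    with Pool() as pool:  # one worker per logical CPU by default
        pool.map(compress, range(jobs))
    print(f"{jobs} chunks in {time.perf_counter() - start:.2f}s")
```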
Also, AMD went through the same process back when they were the leader in multi-core.
@@leviathan5207 Errr... 3D rendering is still done on CPUs, even to this day. So no, lots of people do CPU rendering and are "in their right mind".
Because CPU rendering "just works": no CUDA, no drivers. It just computes.
@@mondodimotori No one except people with no GPU at all renders 3D on a CPU. It is orders of magnitude slower, even on the most powerful CPUs.
I think E cores worked out really well for them in low power laptops. The difference between an 11th gen dual core i3 and 12th gen 2P+4E was massive.
it was a win on laptops
The problem with E-cores is they couldn't get AVX-512 to work on them, so they just dropped the support.
@@ibrohiem AVX-512 is extremely niche anyway, and not really used by consumers outside of some niche emulators.
A Ryzen 5 5000-series CPU matches a 12th-gen i5 for the price of an i3. Yes, the 12th gen is better, but all i3s before 12th gen were dogshit for no reason other than having no competition.
don't know about that, my laptop with e-cores feels sluggish.
I recall that the main idea behind Intel's P-core and E-core implementation was to mimic ARM's big.LITTLE architecture by offering the best of both worlds, a highly performant and a power-efficient CPU, assigning the appropriate cores based on the role (performance or efficiency).
Guys, those E-cores are no slouch… I love the E-cores for audio; Ableton uses both P- and E-cores very well during production and render. Production is where it shines: I can open a bunch of synths on different channels and it just runs amazingly on my 12900K!
Given how fast the E-cores are in Arrow Lake, the question should be why Intel ships 8 stagnant P-cores, worse than last gen's, instead.
Imagine a Core 5 with 16 E-cores, a Core 7 with 32 E-cores and a Core 9 with 48 E-cores, with more headroom to run those at 5 GHz. They are Raptor Lake equivalents for about a quarter of the die space.
Now you are eating into the server market
@ in which AMD is dominating
I lost track of what Intel has been doing ever since they started using e-cores
Tell us you are an AMD fanatic without saying you are one.
@the12gaugeshotty I mean, I don't generally use Intel because I've had problems with them in the past. Just because I use AMD doesn't automatically mean I'm an AMD fanatic, otherwise I'd also be using their GPUs.
@@the12gaugeshotty I mean, it's not hard when the only thing Intel has been doing is Alder Lake over and over again. 13th gen is just Alder Lake with more E-cores and cache. 14th gen is just 13th gen. Arrow Lake (lmao) is just testing a chiplet design on desktop.
because without them a 14900k would struggle to be on par with a 3950X from half a decade ago.
In multi-core performance, sure. In gaming though, nah, not really. But yeah, Intel couldn't have just made an 8-core (or 10-core, in the case of the 10900K) CPU at the top end and then priced it at $600.
@@Dhruv-qw7jf And they'd still lose to an X3D chip in gaming... Intel is in deep shit. E-cores are a band-aid fix to be somewhat competitive in multi-core, and cranking up the boost clock has proven to be a dead end when it comes to single-core.
@@leviathan5207 And in efficiency in the majority of use cases, with their efficiency cores.
@@leviathan5207 why still lose?
Exactly but it's a waste of silicon
Interestingly, with Intel 12th gen, the channel "Actually Hardcore Overclocking" did a funny little test. He deactivated the P-cores and overclocked just the E-cores to match an equal number of Zen 3 cores. The E-cores were slower and, ironically, also consumed more power.
E as in die-space efficient, not power- or performance-per-watt efficient. They work okay in highly multi-threaded workloads, but the heterogeneous architecture causes more trouble than it's worth.
E-cores don't scale across power like P-cores do…
@@harrytsang1501 Boomer Intel excuses.
@@lePurpleDragon Calm down, kiddo. He's factually accurate, not excusing them. 😂
@@Kallisto.0 LOL!! Back down Intel engineer. You're gonna need a new job, soon...
E-cores on laptops are incredible; seriously, basically every laptop uses Intel besides high-power ones because of E-cores. Desktops, on the other hand? I'd rather have the space used for a killer APU, or a 10-P-core system like the 10900K, or even 12 P-cores. Or Intel could go full HEDT and make a full E-core chip: 48 cores would be way better than the 10980XEs they were selling from 2020 up to 2023.
I can't imagine a 16-P-core Intel CPU coming in under a reasonable power budget. They just can't scale the P-core within size and power constraints.
One P-core is the size of 4 E-cores, so replacing the E-cores on Raptor Lake would only get them to 12 P-cores (the 16 E-cores are worth about 4 P-cores of die area, on top of the existing 8), which would get destroyed by Ryzen 9 in every multi-core workload while also being more expensive to make.
I can't believe AMD is still keeping up with Intel and beating them in most applications while using only two sockets, AM4 and AM5, whereas Intel has already done about six sockets' worth of R&D to beat whatever AMD puts out. Really shows who has better employees: 25K vs 100K.
I don't know who has better employees, but I am sure AMD has better management than Intel. Intel fell asleep for years and gave AMD the time to catch up it so badly needed.
@@GreyDeathVaccine AMD is probably a lot more nimble and efficient as a company; it always has been, since it spent most of its history as the underdog. I saw an interview many years ago where someone said they always had a better architecture and the only thing holding them back was the process node.
Intel, meanwhile, is a behemoth in size. That's not a problem when you have 90% of the market, and the company is built to function like it has the money from 90% market share, so once they lose that it gets very hard for them. It's also not something that can be solved by downsizing, not immediately; they need to learn how to compete as the underdog.
Interestingly enough, employee count can be a problem in a lot of cases. Since Intel is so much bigger than AMD, it's harder to coordinate among all the employees, not to mention Intel has shitass management.
Also, the higher headcount is largely due to almost half of those 100K being employed in Intel's foundries. Only a small percentage, comparable to AMD's size, works in the CPU division.
Also, it's worth noting that Intel's new architectures are more reactionary to Ryzen than something planned long in advance. So unlike AMD, who planned support for AM4 for at least 4 years, Intel didn't really plan ahead for long socket support. LGA 1851 was technically supposed to be their longest-supported yet (Meteor Lake, Arrow Lake, Arrow Lake Refresh/Panther Lake), but Meteor Lake for desktop was canned, and so was Arrow Lake Refresh, so LGA 1851 will most likely remain a single-generation socket.
Intel is also in more lines of business than AMD, and they manufacture chips as well as design them. Those are reasons for their larger employee count.
When the scheduling actually works correctly, Intel's hybrid chips make a lot of sense. Gaming workloads, even now, tend to be single-core heavy, and you run into severe diminishing returns from core count past maybe 8-10 cores. Beyond that you'd generally be better off with faster cores in that same 8-core configuration, or otherwise adding a giant pile of cache onto those 8 cores like AMD has done.
Where more cores matter is multithreaded workloads; E-cores are better suited to low-power environments (laptops) or, again, those multithreaded workloads, since you can fit 4 of them into the same die space as 1 P-core.
By doing 8 P-cores you get most of the benefit gaming sees from higher core counts, and by using the rest of the die space for E-cores you let the chip work better for non-gaming workloads (or for gaming + recording + streaming + a million other apps running at once).
The real reasons Intel's chips keep bombing:
1) They just can't seem to get their scheduler to work right; maybe some of this is on Windows.
2) There's simply no competing against vertical cache when it comes to gaming. If the 285K had vertical cache underneath like the 9800X3D does, it would be a monster. But it doesn't, so if all you care about is gaming, Intel has nothing for you.
A lot of people are in that boat: they only game on their desktop, so they don't care much about power efficiency or multithreaded workloads. They just want raw gaming throughput, in which case nothing touches the 9800X3D.
For laptops the hybrid architecture makes even more sense. If I recall right, Intel broke the chip into 3 tiers for the Ultra 9 185: 6 P-cores, 8 E-cores, and 2 low-power E-cores.
It's very interesting stuff, and I genuinely like the Arrow Lake chips, but Intel should not have released them to reviewers with the scheduler so buggy and unfinished. People often don't know this because many reviewers don't retest, but once Intel and Microsoft cleaned up 12th gen's scheduling woes, the E-cores generally started to improve performance in games over having them disabled.
I’ve been saying that e-cores ruined Intel, but now I’ll pivot. That architecture should’ve been exclusive to mobile devices like laptops etc.
The best thing that Intel can do now is put Xeon onto ATX, and bring down the TDP levels. Until then I’m sticking with AMD.
They did that with the Sapphire Rapids refresh: 128 E-cores or 128 P-cores. The 128 P-core parts are faster than the equivalent Threadripper (but they use more power).
Hell no. I have an i5-1235U and it is HELL. Sluggish, with a worse iGPU than my previous i5-1135G7. E-cores can go screw themselves, darn Cinebench boosters.
In terms of performance, the E-cores are basically the same as the Lion Cove P-cores in Arrow Lake. Geekerwan's video 英特尔酷睿Ultra 200S评测:无药可救! (roughly "Intel Core Ultra 200S review: beyond saving!"; Chinese with English subtitles) shows that the E-cores basically reach Raptor Lake performance; what's holding them back is the cache configuration and latency. So in games like Counter-Strike, which are more cache- and latency-sensitive, the E-cores kill performance.
In my opinion the E-cores are good for Intel, and what's holding them back for gaming is latency and clock speed compared to the P-cores, since with Arrow Lake the performance difference between P- and E-cores is minuscule at the same frequency. If that gets solved, E-core-only chips would make sense.
Yes, arguably E-cores are a better design than the P-cores per area, and with Arrow Lake potentially per watt. An all-E-core CPU, with every core connected to the ring bus and a reasonably high power budget, would be a great nT-performance chip, theoretically for a decent price.
@@0M0rty It would be a monster. But Intel are idiots. Imagine a 48-E-core monster 285K with a proper cache config. Zen 11 moment.
@@epeksergastis Thanks to Intel politics, that's not gonna happen. Expect the P-core team to get the most leeway while the E-core team has to work their asses off. Soon the E-cores are gonna be more powerful than the P-cores.
Just here to remind everyone that top-selling DIY parts mean nothing compared to how many chips are sold in corporate prebuilts. A quick search gives me ratios like 1 to 5 or higher.
Yep. Intel is fine for now.
Touché. My colleague and I have been fighting for our company's IT purchasing to go for AMD laptops, but management won't budge.
@@sjneowmanagement can gargle Intel's balls
E-Cores came about because Intel were trying to build a Cinebench ASIC.
I always felt they did it for a lack of a better idea to compete with AMD.
That's basically the only thing Arrow Lake can do well. Run Cinebench over and over...
@@dcikaruga Not lack of ideas but lack of R&D, lack of engineers, lack of direction...
I never was into this hybrid architecture. Disable your E-cores for gaming, period. The problem? No hyperthreading to offset the loss of cores in case you don't have 8 P-cores (8 is enough for gaming without hyperthreading).
@@mustaproductionsperez6726 not in arrow lake
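If you want to try that experiment, step one is knowing which logical CPUs are which. A sketch for Linux, assuming a recent kernel that exposes Intel's hybrid topology in sysfs; the paths simply won't exist on non-hybrid parts.

```python
# Sketch: list which logical CPUs are P-cores vs E-cores on Linux.
# Assumes a recent kernel exposing the hybrid PMU devices in sysfs.
from pathlib import Path

def cpu_list(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "n/a (not hybrid?)"

print("P-cores:", cpu_list("/sys/devices/cpu_core/cpus"))
print("E-cores:", cpu_list("/sys/devices/cpu_atom/cpus"))
```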
E(conomy) cores make sense alongside P(rosperity) cores.
E-cores on Arrow Lake are actually pretty good.
No.
@lePurpleDragon ok buddy, but i wasn't asking
@@nostrum6410 Cope.
Yep, they are the real deal.
@@SaschaRobitzki Enjoy being the only person in the trailer park/cul-de-sac to claim that...
"Real deal" for what, by the way? Explain.
Looking at my 13400F compared to my 5700X3D, they show about the same usage in Cyberpunk. Thanks to the E-cores, the 13400F has 16 threads, like the 5700X3D. So it should be a benefit compared to a 12400F.
I bought a laptop a week ago with this CPU format. I'm not planning on gaming with this laptop so I'm ok with the 2 P cores and 8 E cores because I can do web browsing and other light tasks but when I want to edit a video it still seems to get the job done well enough too.
Honestly, on the laptop side, I was happy with both AMD and Intel. I had a Legion 5 with an AMD 4600H, which was great in its time, and now the i9-13980HX, which is a different beast but can also be quite efficient on battery... which was a surprise. The E-cores definitely helped.
E-cores are all very nice in theory and all of that, but everything falls apart when you see that the total power consumption using them with the P-cores has nothing to do with that "E" (if by that "E" we mean efficiency), even in their latest iteration.
E-cores are fine on mobiles and laptops but not on desktops.
They're a different kind of "Efficient": die-space efficient.
You can fit 4 E-cores in the space of 1 P-core. It never had anything to do with energy efficiency at all (and I don't think Intel ever made that kind of claim either, just that the CPU overall has improved power efficiency with their latest generation).
@@MLWJ1993 Then the P-cores are massively oversized.
E-cores can't clock as high as P-cores, but I think E-cores are meant for smaller applications that don't require a lot of CPU power.
@@pootispiker2866 It isn't when the E-cores' performance is like something from 5 generations ago.
@@sjneow Damn, it's almost like that isn't relevant
I like the E-core concept. My 14600 is like a six-core for gaming, but when tasks can run in parallel, like certain fractal renders, the performance is really good. Best of both worlds. Even the monolithic design has certain advantages, like more direct memory access, allowing smaller caches. I don't even want the E-cores removed from my gaming rig, as the E-cores can handle Windows services and other stuff, keeping the P-cores free.
"E-cores" are great. Using "Process Lasso", I assign my browser to the "E-cores" and cpu mine with the "P-cores" on my 13700k. (I have dirt cheap electricity.) I can even play old games on just the "E-cores" and Intel 770 graphics (Batman Arkham Asylum etc). My 5800x3d is great for gaming but not good for multitasking like Intel is.
My 5800X can do the same exact things as a 14600k for less money. The beauty of x86 is that old games work on every x86 cpu. Intel just needs to try harder and they refuse to.
@@pootispiker2866 No. The 5800x is now a good entry level gaming cpu but it simply can't multi-task like the Intel cpu's that have "E-cores" & "P-cores".
@@pootispiker2866 why compare it to a higher end cpu? A 12600k can do what your 5800x can do and better for less money, so can a 7600x 🤣
@@JamesSmith-sw3nk I can guarantee you that i can play VR games and listen to youtube videos while downloading another steam game at the same time. I'll take your word for it that another multicore CPU can also run multiple processes at the same time.
@@XFXGX I was making a point on how stupid the whole multitasking argument is when talking about multi core CPUs. It's stupid because windows will never make two processes fight over a core when there's another with lower utilization, regardless of how performant one is vs another.
To me it initially felt like E-cores were just a cheap and nasty way for Intel to boost their core/thread counts to try and have an edge over AMD in multithreaded workloads that can use pretty much every core.
I do think over time it's evolved to be better, especially in regard to lower-end mobile CPUs, but I've come to like AMD's CPU/APU designs for any laptop that needs 6 or more cores, simply because the efficiency is nearly unparalleled at that level.
Or simplify like I do: P/E cores = BOOMER design.
If you do any tuning, you'd know the E-cores do more for performance than the P-cores now. It's more likely that with time the P-core is phased out in favor of a reworked E-core that can be the primary architecture. That's why you see a unified architecture in the new roadmaps being leaked. Likely you'll eventually see something like Clearwater Forest on desktop as well, where it's basically a ton of E-cores on the same LLC, which would be better for gaming than AMD's 3D V-Cache because it's unified.
The E-cores on Arrow Lake match Alder Lake in performance. Since they're that good now, getting rid of them would be a dumb idea. Intel should create their own version of X3D chips to directly counter AMD.
"But.... but AMD is gluing cores... " - somewhere in Intel HQ.
Intel is working on implementations similar to 3D V-Cache, and is set to use it in their upcoming server processors, but is targeting gamers second, meaning likely only around the Nova Lake period.
(Source: a recent der8auer podcast)
Some of Intel's mid-range CPUs actually make really good budget all-rounders. I have an i5-13500: a 14-core, 20-thread CPU with the Intel UHD 770 iGPU. Its day-to-day performance is comparable to the 13600K. It's capable of relatively heavy Premiere, Photoshop, vector and CAD work comfortably. I play older games like GTA V, Skyrim, etc., and my personal favourite PS2/PS3 emulator games, comfortably, without a discrete GPU. It doesn't run hot like Raptor Lake (it's an Alder Lake die) and isn't power hungry. Best-value LGA 1700 CPU.
I hope it lives long enough for you to be satisfied with your purchase.
@@GreyDeathVaccine it will
@@GreyDeathVaccine Considering the i5-13500 can be had for around $200-250 in most areas, it's actually a pretty good deal for productivity + gaming. It gets 7600-level gaming performance, with higher productivity performance than the much more expensive 7700X. Plus, it's not affected by the Raptor Lake instability issues, so... it's pretty good.
E-cores are best suited for tasks that use a lot of cores, server-style tasks for example. P-cores are best suited for tasks that use one or just a few cores.
Consumer processors are general-purpose processors, so they need to be able to do both.
As we move forward I think we are going to see a lot more segmentation in the market, with gaming orientated processors and productivity orientated processors splitting into two groups in the consumer market. High end work stations and servers are going to have mostly or only E cores.
Intel are trying to position themselves as being the best option for the productivity consumer market, while AMD are focussing on the gaming consumer market. For those two markets we see the 285K and 9800X3D as the respective best in class processors.
Does the 285k beat the 9950x reliably?
@@vyor8837 yeah, with low power consumption and low heat
@@QOX8008-8 no, no it doesn't. And it still pulls more power than the 9950x.
@@vyor8837 Keep denying. It does use more power than a 9950X under load, but at idle the Ultra 9 consumes 20 watts while the Ryzen consumes 70 watts.
Overall I think E-cores are a stopgap. I don't think HT is coming back anytime soon (why have 2 threads limited to 1 core when you can have multiple cores each running a thread at full speed?). They work well on laptops due to portability limitations; they follow the modern mobile-phone structure (low-power multitasking). I feel E-cores are currently a way to bridge the performance gap left by HT, and after that I honestly think Intel wants P-core performance in the size of an E-core. If that were achievable now, the 285K could have been a 32-core CPU, in one sense.
Thing is, there is a lot of bloat on the die as well; so much is crammed into Intel's packaging. The tile structure helps, but it's a lot of development on all these things at once, and it all takes up space.
E-cores are Intel's way of saying they can't control heat and power consumption.
I have a program that lets me allocate every app to any CPU core, so I always control which programs use the P- and E-cores. I actually really like the hybrid CPUs from Intel.
Interesting subject, thanks for making a video about it!
I agree Intel added the E-cores in to remain competitive with AMD. But the E-cores (Gracemont) were basically taken from low power Atom/Celeron SoC chips and designed to perform at low frequencies. So they had to be restrained and the construct with the Thread Director had to be worked out to make the idea viable. I still think this drags the effectiveness/performance of the CPU down and that AMD is on a better course, with their main challenge being to optimize on-chip latencies.
Yes, the E-cores make sense. Almost all computers, 90+%, are idle and waiting for input. Driving the peripherals does not (always) require maximum power.
The P-cores can then be reserved for computing-intensive tasks.
Only on thin-and-light laptops, in my opinion.
As long as you can schedule properly, I could see the ideal Intel CPU being 6 BIG P cores and like, 2-4 E cores to handle the OS and background tasks more efficiently. Getting rid of hyperthreading seems like a good idea to get the most juice out of each P core.
In my opinion, getting rid of Hyper Threading was a mistake. It would be better to have the Thread Director and the system decide whether to enable Hyper Threading on a given core or not, depending on the type of calculation and the occupancy of individual execution ports in the core.
Despite all the excuses in the world, I really cannot comprehend "E-cores" on desktop. For laptop? I can understand that. Desktop? No. Just create a good core, not a big one.
I still got a Core Ultra 7 265K 😉 Whatever everyone says, I upgraded to it from an i5-12600K and couldn't be happier.
I think the real question should be: what are the power and performance of an E-core vs a P-core in Arrow Lake when clocked at the same speed with the same amount of cache?
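The performance half of that experiment is easy to sketch at home: pin the same single-threaded workload to one P-core, then one E-core, and compare. A Python sketch; the CPU indices are assumptions that vary by SKU, the clocks would need to be fixed in the BIOS for a fair comparison, and measuring power would additionally need RAPL or a wall meter.

```python
# Sketch: time an identical single-threaded workload pinned to one
# P-core, then one E-core. CPU indices are ASSUMPTIONS (vary by SKU).
import time
import zlib
import psutil

P_CORE, E_CORE = 0, 16  # assumed logical CPU indices

def timed_run() -> float:
    data = b"x" * (8 * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(50):
        zlib.compress(data, 9)
    return time.perf_counter() - start

me = psutil.Process()
for label, cpu in (("P-core", P_CORE), ("E-core", E_CORE)):
    me.cpu_affinity([cpu])  # pin ourselves to a single core
    print(f"{label} (cpu {cpu}): {timed_run():.2f}s")
```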
IMO one of the main reasons Intel started the big.LITTLE approach is that they were stuck on their older lithography, so their efficiency and power requirements couldn't keep up with AMD's Ryzens.
E-cores, or efficiency cores, are used for low-demand background tasks. They are still used to improve the overall efficiency of the chip. In the new chips, I understand, they can be individually overclocked or undervolted. These new chips won't change much for normal users who don't overclock, but for those who do, it's like a new frontier of complex and tedious tinkering. Careful not to burn up your chip, though.
Phones use P/E cores too; it's just a better way of handling low-power tasks. The issue is when the E-cores become a hard limit on performance.
Intel could make a 32-core all-E-core CPU that would work OK for most people and also use less electricity.
While I understand why they had to go the E-core route, why ditch hyperthreading with the Ultra lineup?
Atm I have a hard time configuring them for older software; Windows 10 has an issue figuring out what to do with them. I'm trying to make use of them because the company bought i9-12900Ks just to disable the E-cores and Turbo Boost, and I've been wondering why.
When I tested a couple of programs, some performed better with the E- and P-cores synced at 3.2 GHz as 16 cores than with all 8 P-cores at 3.2 GHz with HT on. Later the E-cores would drop to 2.4 GHz and cause a lot of stuttering.
Sometimes I notice Windows just forces the software onto the P-cores, and it performs worse than with only the P-cores enabled. I'm at the point of disabling them altogether. No clue why the company would buy these CPUs just to limit them, when they could have gotten the i7-12700K and saved money, since they're only using the 8 P-cores.
I think Intel could eventually offer an all-E-core CPU as they focus on improving that architecture. I could see a 48-E-core CPU sitting alongside a 10- or 12-P-core CPU on the same socket. Intel's problem right now (and for the past 10 years, I think) is that they haven't been innovating and executing at the level they need to in order to get things to market in a timely manner. Edit: I think Intel would need to go with a higher core count on an all-P-core design due to not having SMT.
An all-E-core CPU would be interesting, but for whom? Not for games or desktop use.
AFAIK Intel's E-cores don't have feature parity with the P-cores, unlike AMD's Zen 5c cores.
@@qdeqdeqdeqde Well, maybe not in their current form. I think they will keep developing the E-core architecture to the point that it becomes viable as a standalone option for production tasks and desktop use. Not something I would personally be interested in, but some people might be. Maybe they bring SMT to the E-core along with some more cache and instructions? The problem is that we don't see them pushing the boundaries the way they need to, so I'm not too positive on them making any exciting moves.
If multi threaded workloads are important to you, why would you buy a hypothetical 48 core cpu, with 48 very slow cores, rather than a threadripper or server chip?
@@leviathan5207 I would not call them "very slow". The total performance of the E-cores in a threaded load is higher than what fewer P-cores would give; otherwise the product wouldn't make sense. The point of consumer CPUs is a price lower than a server chip or Threadripper.
The fundamental issue with Intel's heterogeneous designs is the OS, and the open nature of the PC platform in general. Microsoft has to take a much bigger leadership role for this mixed-core approach to be successful, if Windows is ever to come close to how elegantly iOS schedules between the different cores. I would rather have the E-cores be exclusively available to the OS's services and processes and leave the performance cores for the applications alone. This could be a good baseline to further refine and expand the scheduling, since the OS code itself is rarely math-heavy.
If they made a modern 10 core single die CPU that would be potentially glorious for gaming. I kinda miss the 10900K.
because higher number on spreadsheet = better
That intro music is nice!
People complain about E-cores now, but when AMD goes with Zen c-cores in the future it'll be fine, watch.
Intel E-cores and AMD c-cores are two different concepts. E-cores have fewer features and less processing power than P-cores. AMD's concept is to make a full-featured Zen core with half the cache.
There is no difference between a Zen c-core and regular Zen; it's just a compact version of it.
@@SAslund Not sure about concept, but Zen 4 and 4c cores have the exact same cache sizes, both in L1$ and L2$. To me it's just cost cutting, just like Gracemont cores. Not power-efficient but cost-efficient cores.
@@wahdangun To add to your comment, Gracemont cores really are just a more limited (and worse?) iteration of Skylake, whereas Golden and Raptor Cove are whole new architectures.
I mean their E cores this gen are about as strong as the P cores from last gen. Is it really the E cores or the way they are being prioritized by the OS and games?
I don't see how this can be remotely true. P-core is so much larger than E-core and cache hierarchy is much different.
@@GreyDeathVaccine It is absolutely true. See Geekerwan's video, where he verifies Intel's claim that Skymont's IPC is higher than Raptor Cove's in SPECint and SPECfp, in Arrow Lake. It's also not that far behind Lion Cove (Arrow Lake's P-cores). Lion Cove is just an incredibly bloated and bad design for what it offers.
The problem with E-cores is that they sound great on paper, but in reality Windows and many programs and games can't work with them properly. Let's see if the 9950X3D is going to be the CPU that solves this gaming/productivity dilemma. I want to move past 8-core CPUs without the current issues of both platforms' multi-core solutions.
Why? Because they don't know how to make a CPU with many cores; that's why they're stuck in server CPUs. They don't know how to scale core counts properly.
5:52 Yeah, I still don't get why Intel spread the P-cores around the E-cores on the 285K. Maybe for heat and efficiency, but I feel like it hurt performance too much because they were farther from each other. Or maybe it's a scheduling issue, I don't know. Intel has said they would fix or improve the scheduling for gaming, because at the moment disabling 7 of the 8 P-cores gives higher FPS in games, so something is wrong there. But even if they come out with a fix, if it's even possible to fix and not a hardware issue, I still don't feel like they could come close to the 9800X3D.
They just suck in games and don't care about that fact either. They have stubbornly said they still won't include any more cache next generation, or use a kind of 3D V-Cache.
E-cores only matter in laptops. On desktop, Intel should've given us a better iGPU instead of wasting silicon space on E-cores; at least 1.5-2x the power of a 1050 Ti.
Still? Aren't they a pretty new introduction?
I have a 3900X and am still doing work with it. I heard it's the most power-efficient of the Ryzen 9 lineups, from some people who were mining coins with it. Should I upgrade to a 5950X?
They found a way to do to AMD what AMD did to them at the dawn of Ryzen: beat them in productivity with mohr cohrs, since you don't have a way to win on IPC, especially with regards to gaming. We'll have to see if it pays dividends in the long run. I actually do think they'd only be further behind without them.
AMD is virtually at parity for many productivity tasks with the 9950x.
@ true…I should have constrained my statement accordingly. It’s more accurate to say they’ve achieved parity via increased core count akin to what Zen 1 did against Intel back when.
@@gymnastchannel7372 Yeah, I'm not buying different CPUs for gaming and productivity.
My Ultra 7 with 20 cores absolutely crushes any task, no matter how heavy: Blender, Octane, Photoshop, Lightroom and, most importantly, Premiere Pro!
love the way that we've gotten to the 'find out what's to blame' part of intel's dumpster fire
E-core is Intel's word for "we cannot control consumption and heat".
6 P-cores is almost always enough as long as you have enough E-cores; at least 8, I would say.
Only if Intel's brains come out of their asses and they look at why their ancient E5 v4 Xeons work so well in games while running at slower clocks.
Those had huge cache for one.
huge cache = monies spent
I started believing in the big.LITTLE design of CPUs because of Apple, but for desktop PCs it just doesn't make sense.
And unless you control both your hardware and software like Apple does, I feel like it just makes a lot of things worse. I recently returned my 285K for a 9800X3D and am so happy with that; I'll be getting the 9950X3D soon for a few more of those performance cores. But now that I am on the AMD platform, I'm starting to fear AMD's next generation: it's rumored that they will start implementing E-cores with Zen 6. So I hope they do a better job with scheduling, and maybe work with Windows and/or games and other software to improve how the cores are used.
I have a laptop with an Alder Lake i5-1235U with 2P+8E cores. So smooth if I'm playing YouTube alone, but when there are multiple apps open and running, the laptop really starts working hard with all cores fired up, and then everything goes south: less responsive, lagging and jerky. I had to set a performance profile to make the issue less noticeable, but it's still noticeable. I guess with a 15-watt CPU, nothing ever goes smoothly.
I'm getting a 5625U soon; it's at least $100-150 cheaper than the i5, so it was a no-brainer choice for me. You can use a third-party app to boost the TDP up to 25/30 W. You should investigate a bit for your i5 as well.
Huh, but my 15-watt, 8-year-old i7-7500U doesn't have the same issue your newer CPU has. Weird.
That's interesting. My father has a laptop with that exact same CPU, and it breezes through even with 15 chrome tabs and 5 other applications open. Granted, they aren't the heaviest applications (teams, vscode, spyder, webex), but still.
So you're telling me that my SUPERIOR i9-12900K has actually been obsolete from day one?
So should I turn off the E-cores on my 13600K? It's a desktop Aorus Z790 AX Ice with 32 GB of DDR5 RAM.
I have a clocking option to turn the E-cores off...
Because they removed hyperthreading.
I really do not care why Intel uses E-cores. I care only about the gaming performance the hardware can give me. That's why I have NO Intel in my PC case ✌️😄
For cash-rich enthusiasts, why not make a super-huge 10-P-core godlike CPU? People at that level love performance; they don't care about power draw or cost. Give them mooor P-cores, dammit!
Obvious... under 10 W for the 265K at idle rather than nearly 40 W for my 9900X. I think Intel has to satisfy everyone, not just gamers, and they have done that while saving huge amounts of power. Under average use conditions, where the CPU isn't flat out, Intel wins. Even for gaming, are the relatively small differences to AMD worth it? I wonder if people are just caught up in benchmarks too often. They also fully support Thunderbolt in the new Ultra CPUs, and CUDIMMs with onboard memory controllers. I think once Intel optimises their microcode for Windows, they will be quite strong, even in gaming.
So true, AMD hasn't stooped to this marketing gimmick yet. Who wants to run tasks on low-performing cores when they can be run on high-performing ones? We need all high-performing cores for gaming and other tasks; I never want to "go slow" for any task. Four E-cores are better than one E-core, but four P-cores are better than one P-core plus four E-cores. I still like the idea of one big super core, say 20 GHz; let's go to that!
Actually, Intel kicks AMD's rear in productivity, so E-cores are useful. The main issue they have to grapple with for Arrow Lake is latency; once they correct that in future generations, AMD processors will not be competitive outside of gaming.
@@Tugela60 AMD still wins in Adobe Photoshop
ruclips.net/video/XXLY8kEdR1c/видео.html
In Adobe Premiere, Intel wins by 3.9%. I wouldn't call that kicking AMD's butt.
ruclips.net/video/XXLY8kEdR1c/видео.html
@@GreyDeathVaccine I think he was referring to below the ultra high end, where similarly priced Intel processors are way faster than similarly priced AMD ones in production. It's only at the ultra high end (i9 vs Ryzen 9) that the gap is minimal.
Threadripper has no competitors.
@lucasmarciel1527 No customers either.
Intel should remove the E-cores and put some cache in the E-cores' die space. Intel is cooked; their CPUs will never be faster than AMD's 3D-cache CPUs.
Removing E-cores without using Foveros 3D stacking would mean making an 8-12-P-core CPU at a much higher cost, because it would be monolithic, without performing much better than now. It's why AMD won't make a high-end monolithic GPU with RDNA4: they would have to sell it for much more than the $999 of the 7900 XTX (and they didn't make a profit at that price).
The best thing for them to do is what they're doing now: rethink the architecture with the 200S and optimise it with the 300S and 400S, like AMD did with Ryzen 1000 and Ryzen 2000 to arrive at Ryzen 3000. That's one of the reasons they lost performance versus the Raptor Lake refresh: they deleted HT, built on Foveros 3D-stacked chiplets (the latency is much higher), and swapped between TSMC N3B and Intel 18A.
Also, people forget that going from Bulldozer to Ryzen was only ever going to be an upgrade, because you can't do worse than Bulldozer. By the way, the day Ryzen becomes obsolete for AMD, the same thing that's happening to Intel now will happen to them. Simply put, AMD already knows that a hybrid architecture like Ryzen AI HX is slower than the current Ryzen laptop architecture; that's why they haven't brought the Ryzen AI architecture to desktop yet.
E-cores are replacing P-cores; the P-cores are being phased out in favour of an improved E-core design that no longer needs P-cores for gaming, since by the Zen 6 timeframe E-cores can hold their own. You need 4 per cluster, times 8 clusters, for the scheduler; that makes up the "Royal Core" design. Every cluster has 4 E-cores with a shared slice of L3 cache, meaning 8 L3 slices of 32 MB. Is that possible? Yes, since it's all on the compute die; Intel itself sees no need for 3D cache or any cache enhancement when everything is right in front of you: 32 MB x 8. That puts the cache beyond any 3D cache by the number of clusters per L3 slice, since the entire 256 MB of L3 can be accessed by each core. And I haven't even started on L2, when there are 32 of them, each with its own L1. Nova Lake introduces updated E-cores, and what comes after it will be the biggest-cache CPU on earth.
Really, for gaming, 11 CPU cores is enough: 8 cores for games, 1 entire core for some anti-cheat programs that want one to themselves, then 2 cores for Windows. Why are we getting 32-core CPUs in the consumer lines? Marketing, in short, as AMD will be able to say they have a consumer chip with more cores than Intel. When they find something else to fight over, I'm sure they'll stop trying to shove more cores onto a single package and focus on whatever that is.
Now who actually needs 32 cores? In reality it's 32 cores, 64 threads, and only workstations: running compilers; render jobs, where the back-end render of a video takes as many cores as you can throw at it while the finished output takes about 1 core to play back; simulations, which usually run on the CPU, not the GPU; and so on. Things consumers don't do outside the workplace.
Two real experts. 🤣
Says all AMD fanatics that only see gaming and don't grasp PC multitasking capabilities.
In terms of wasted silicon for gaming: Integrated graphics.
Intel should have done something similar to AMD's chiplets at this point, or just 3D cache.
Intel's monolithic dies and now tiles clearly aren't working.
They can't, there's a patent
I hope they improve on the Ultra series' tile design. Even AMD had issues with it at first (bad latency between dies on early Threadripper generations).
@@blazingCFC No, this time it won't be like a Zen 1 moment for Intel. Back then Intel was complacent; they did too little, too late to compete with AMD's chiplet design, while AMD kept making chiplets better every generation and finally surpassed Intel. But right now Intel doesn't have time for improvement at all, because AMD isn't being complacent like Intel was. Intel also has big problems with cash on hand and funding, and they've laid off many of their experienced CPU architects.
the tiles approach is fine, they just didn't execute well
@@nostrum6410 It's fine as long as Intel has time to improve it, but right now they don't.
E-cores are cheaper than P-cores, I guess.
Copying Apple's CPUs gone wrong for Intel. Pathetic battery life, considering AMD CPUs give better battery life despite only having P-cores.
I think E-cores should not be on the CPU but on the motherboard. They are weak and cheap, so upgrading them makes no sense at all. That way you could already run your OS for server or basic debug usage without an expensive CPU, and when you add a CPU you get the PC power you want without paying again for cheap, weak E-cores with the upgrade. Now that this is public, you can't patent it.
Buildzoid doesn't like the e cores because they sabotage uncore overclocking by simply being enabled.
_I don't like them_ entirely because they mandate the use of Windows 11, since Intel won't attempt to fix the scheduling issues at all with the chips in Windows 10.
Blame M$, not Intel (I'm not an Intel fan; I recently bought a Ryzen 5900X).
@@GreyDeathVaccine Considering that nobody's made a sound card worth using that is supported in Windows 11, the fact that intel wants to *require Windows 11* to use their CPU's was already a point against them.
@@GreyDeathVaccine It's also Intel's fault, you can't just blame MS for not implementing scheduling for Intel CPUs. It's also Intel's job to make sure to implement proper scheduling across all versions of Windows.
they should make the e cores really good at integer and bribe devs to make integer optimized games :))))))
Economy cores.
e-cores are great
E-Cores should only stay in Laptops.
No, it also makes sense on a desktop. Most computers wait for input and only need high computing power for a short time.
@@mibnsharpals Not with too many E-cores; 4-8 E-cores are enough. You might not like Apple, but their hybrid-architecture philosophy is better.
@@mibnsharpals Yeah... as long as there aren't 16 of them. Intel can only afford to have fewer E-cores than P-cores in their CPUs if they improve the size and IPC of their P-cores.
This may be a really daft question, and I know nothing about tech... but if 4 E-cores are the size of 1 P-core, would there be a way for the 4 E-cores to act as 1 core?
So... multi-threading? In workloads optimized for that, the E-cores do what they are supposed to (Cinebench, for example). But any workload optimized for parallelization is better off being done on a GPU anyway. And rather than trying to split a task into four and piece it together at the end, which adds latency, processing time and complexity, just let a single fast P-core handle it and be done with it.
There would be. There's something called "rentable units" that Intel was working on that does exactly what you said, without all the latency that comes with dividing a task across multiple cores. But they cancelled it; their internal politics forced the one guy working on it to resign.
No.
That's what rentable units were supposed to be.
Ironically I turn off the ecores for gaming. Never believed in the hybrid architecture, had a bit of FAFO, even on 12th. I don't need to build for a while but next will be AM5.
E-cores seem dumb. AMD uses fewer watts without naming any cores "efficient" 😂
Didn't the E-cores come along with Intel's big.LITTLE architecture?
Intel is trying to manufacture cheap CPUs with fewer cores and fewer heat problems to make more profit. Their chips have turned more and more into mobile CPUs.
Soup rim. That is all.
Hey HUB! Why do people use CPUs for anything other than playing video games that I like?!
Sales of x3d chips indicate the opposite. In 99% of cases, a home computer is used for games
@@lizardx6504 No one should be making any other chips, they are useless!
Noooo. It's Tomas Rebord! I don't like Tomas Rebord, but his CPU analyses are pretty sharp.
Lmao, their conception dates from before the Jim Keller and Pat Gelsinger era. Jim and Pat hate them, and they're going away slowly now. Panther Lake might be the last generation that has them.
But Jim left the company. His ideas were scrapped.
Nova Lake will also likely have them. According to MLID, if you believe him that is, the next gen after Nova Lake is supposed to scrap them in favour of P-cores only, with high PPA, PPW and PPWPA.
Monolithic will always be the best for gaming.
Tell that to the 9800X3D then.
@@Riyozsu
You do realize that if the 9800X3D was monolithic it'd have better 1% lows right?
@@Riyozsu The 9800X3D is monolithic; it's only 1 CCD.
@@guhasegawa7834 But it still has to go to the IO die to feed itself. I don't remember exactly, but I think it has to cross at least 10 mm, which is not a small distance.
@@guhasegawa7834 It's single-CCD but not monolithic. Monolithic means the memory controllers, cache, SRAM, etc. are all on the same die; the 9800X3D still has chiplets.
If the 9800X3D were monolithic and had the 3D stack, it would possibly be the most brutal performance leap of all time; you would've seen a 30-40% jump over the 7800X3D.
I think Intel should have stayed with 8 E-cores; that's more than enough. 16 is dumb.
I prefer the P+E core setup.
Games and main processes on the P cores and background tasks on the E cores.
On Raptor Lake they changed how they wanted the setup to work, and the E-cores are used as additional processing for games. The main downside is that all the E-cores are powered up, so you get high power consumption.
Any process that isn't a game or something in the foreground goes on the E-cores, allowing your game (most likely using one to three cores) to have a core all to itself for maximum processing speed, not in the way of tons of other processes like on AMD, where it shares cores with the background processes.
The monolithic die is flawless compared to the chiplet design, which has its own issues, as seen in AMD's CPUs and now Arrow Lake.
Any AMD CPU is capped at 8 cores during gaming, as it shuts the second CCD off, but Intel keeps all cores on and allows smooth operation during any task.
Depends on the game. In a game like Minecraft, both CCDs light up to render chunks