It never ceases to amaze me that as a species we've been able to take sand (and other materials) and turn it into something so unbelievably complex.
The very same species who think God sent them a book with talking donkeys...
@@ozgur5443 well, the species is a broad spectrum…
@@ozgur5443 The very same species that believes the entire universe was created without a creator.
@@ozgur5443 and talking snakes
@@qasimabdul-aziz283 If a creator created the universe, who created the creator? Who created the creator's creator??
It's a paradox, the big bang and a creator are kind of the same, both are unexplainable paradoxes. Sure there could be a creator, but saying X religion's God is that creator is the problem.
If a god exists, it's none of the gods known by humans, or at least none of the abrahamic gods. Those specific religions are just silly......... 🤦🏻😂
Hybrid architecture is likely the future. Not every thread needs 4+ GHz. If the Thread Director and Win11 continue to improve their ability to prioritize workloads, this seems like a great way to be efficient.
Yes, but I also think they went that route due to Intel's larger process node. If games can use more than 8 cores, why not have the additional cores at full potential? Hybrid CPUs could backfire for gaming in the future.
@@mikeramos91 agreed. But imagine if Intel can have this hybrid, efficient CPU on a smaller node, which they've stated is coming.
The only reason that intel has "Big / Little" is that they CAN'T put all power cores on one die. The cpu would draw north of 400 watts easy. This is just marketing saying "see we have high core counts too!".
@@derek8564 THIS! Exactly
It doesn't work. They drain the battery faster than Tiger Lake. Even AMD Ryzen chips without that bs hybrid architecture are more battery efficient.
Hmmm interesting. I'm not really a first-adopter kinda person, but will be interested on how other people's experience/ reviews of this new Intel thing pans out.
Tbh the only reason I'm buying the 12900 is because I couldn't find a 5900X. I was going to buy a 5800X, but for $300 more I can get a chip that's faster and cheaper than the 5950X on new technology??? Why not........
I see this as Intel's Zen 1, more competitive chips are yet to come. Now they just need to keep these hybrid chips cool + more efficient. Thought I'd never see the day Intel has the hotter and more power-hungry chips than AMD 🤦🏻
@@BasedAstraea where are you? 5900x is readily available
@@BasedAstraea Why not wait for the updated Zen 3 parts with the 3d cache?
@@sameerkhatri588 yeah for $1000 retail and scalper prices 😂 I'm in the USA
@@DarthChewie isn't that coming beginning of next year? I can't wait that long LOL already took me the whole summer to save up 😂
Not worried about building anymore. Prices don't justify the gaming performance you get these days. Not to mention we just keep getting more power-hungry parts, which shoots up the energy bill. Just not worth the money in 2021.
You are right, upgradability is a double-edged sword.
Yeah, I mean I would only buy a 3070 or 6700 graphics card and the cheapest new 8-core chip; anything above that requires ridiculous amounts of energy and will be low end in 3 years anyway, when I could buy 2 mid-range cards for the price of 1 top-end card. But I ended up going with a gaming laptop because it was just cheaper and also much more power efficient. We will see, I do want to build a DIY computer eventually in a couple years, once things hopefully settle down or there is some sort of crash due to everyone ordering nonstop.
Yep. Buying a gaming laptop has never looked so good. They used to be ripoffs but since you can't get a GPU without buying a pre-built Desktop or a laptop, and pre-built desktops have shit thermals anyway because companies cheap out on their CPU and GPU coolers, might as well get the portability of a laptop.
Apple's M1 Max (30 watt TDP for the CPU) uses about one-fifth the power of a Ryzen 7 5800X or i9-11900K in Geekbench 5, and it's faster in multithreaded tests and similar in single-threaded.
@@murderbymodem Zen 3 and Tiger Lake laptops are amazing.
Gonna be interesting. I'll wait a few years before I build my new Intel rig, though. Let the tech evolve for a while, I say.
ya it's kinda crazy stuff they're trying to cram onto a CPU... Heck, I still like manual roll-up windows... LOL
Tech will keep evolving. It will not stop. New and better stuff will keep coming. So there is no point in waiting. Buy what's out now.
In a couple years you might as well wait a couple more. And a couple more after that. Tech continues to evolve forever. There is always the next big thing around the corner. Upgrade when you can/want to.
Well there are roadmaps already this decade to make less than 1nm processors. Let’s see if Intel can catch up in the next few years.
The biggest reason why I'm so hyped for Alder Lake is that Intel is finally trying the heterogeneous processing that ARM introduced a decade ago in the form of big.LITTLE and now DynamIQ. More combined power for less energy, depending on the workload.
Yeah, hope they can really compete, especially now that the ARM threat is looming.
I really love to see more Competition!
@Matthew Shields I really hope Intel will be able to compete too. Looking forward to more competition
You gotta ask why you need E-cores in a desktop. I think it's stupid. Why not have a SKU with all P-cores and drop the E-cores? Imagine a 10, 12 or 16 P-core chip. Dumb in my book.
It's x86, not ARM, and on a bigger process node.
But it seems like their efficiency hasn't improved by that much. Their peak power draw goes up to 240W, which is terrible. Performance-per-watt wise these chips are still bad. Because of this, Alder Lake will only see benefits in desktop systems; laptops cannot support the power demands of this processor, so the performance on laptops is still going to be bad compared to Intel's competitors, especially Apple's ARM chips.
I could of sworn I heard “demon tree” not “Dimitry”. 😂
CoUlD oF
"Demon-tree is taking a vacation, during which he will complete the Abramelin Operation."
I definitely messed up saying Dmitry. Thought I could fix it in post. NOPE.
I was expecting Mike but then Snows shows up. Trippy Sunday!
Crazy, right? I'm back from vacation so will start off next week fresh! - Mike
@@HardwareCanucks hahaha it's all good. Snows is great too!
@@HardwareCanucks Alder Lake does not need a new cooler. Major manufacturers are making brackets for existing coolers. Channels like yours feed the misinformation and misconceptions about one brand or another.
It does not help new builders, and it gives uneducated fan girls incorrect information to spread around.
“If there is a task it gets assigned to one of these cores”, this is not entirely true. The task is assigned to a thread. That thread can shift cores because of temperature or branching logic in its code. The improvements in instructions per cycle take advantage of multiple cores and where the instructions and data are stored in the CPU’s cache.
he's simplifying it for the noobs. Obviously scheduling is a yuuugely complex topic that you can study for years.
But if you disable hyper-threading then core and thread mean the same thing here
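To make the thread-vs-core point above concrete, here's a rough sketch of pinning the same work to different core groups by hand. It assumes Linux and a purely hypothetical numbering where logical CPUs 0-15 are the P-cores (with hyper-threading) and 16-23 are the E-cores, so treat it as illustrative rather than a recipe:

```python
import os
import time

# Hypothetical Alder Lake layout (check /proc/cpuinfo on a real system):
# logical CPUs 0-15 -> P-cores (8 cores x 2 threads), 16-23 -> E-cores.
P_CORES = set(range(0, 16))
E_CORES = set(range(16, 24))

def busy_work(n=2_000_000):
    # A small CPU-bound loop so the scheduler has something to place.
    return sum(i * i for i in range(n))

# By default the OS is free to run this thread on any core and may
# migrate it between cores (e.g. for load balancing or thermals).
print("default affinity:", os.sched_getaffinity(0))

# Pin this process to the E-cores only, then to the P-cores only,
# and compare how long the same work takes on each group.
for label, cpus in (("E-cores", E_CORES), ("P-cores", P_CORES)):
    os.sched_setaffinity(0, cpus)
    start = time.perf_counter()
    busy_work()
    print(f"{label}: {time.perf_counter() - start:.3f}s")
```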
Thanks, this was very informative and well presented
Are they trying to mimic ARM SOCs?
Maybe I'm reading too much into it but I feel like this is something of a coded message for anyone who might be tempted to preorder on Oct 27-28 before the actual reviews come in. As one such person though the only thing I really feel like I got out of it was maybe "DDR4's fine right now, don't sweat getting one of the DDR4 boards."
I wonder if it would be worth it upgrading my cpu finally 🤔 I have a i7 8700k and a 1080 Ti with a 1440p monitor
@@HollowRick Nearly identical to my specs, just lose the K and the Ti. In my case though, the eagerness to upgrade was triggered by finally being able to buy a 3070 card and finding it was just TOO DAMNED BIG to fit in my existing case, and a series of "If I'm going to do this I might as well do that..." decisions outward from there. I've been sitting on everything except the big three for two weeks and my hopes that the wait will have been worth it are quite high.
@@rna151 wtf my reply got deleted 😐
Wanna explain how a Gracemont core can be faster than a Skylake core? Maybe per watt at the low end? But at the high end, that's definitely not possible.
At the same clock, a Gracemont core is supposed to be equal to or better than a Skylake core in single-threaded workloads. Sure, when Skylake is at 5GHz it'll smoke a Gracemont core at 3.7GHz, while also drawing 3-5x more power. I can go more in depth if you like.
It's important to note a Skylake core is much, much larger than Gracemont.
Gracemont is a fairly advanced design and in Alder Lake, while the cores are BASED on Gracemont, the way they're set up with cache, load / store bandwidth, instruction set optimization, etc....well they're very different beasts.
@@pilerks1 Where did you get this information from? Intel's press release and extensive marketing blurb I would imagine. We currently have no idea exactly what performance these cores offer, because whenever Intel discuss it, it is obfuscated by embedding it in terms of 'performance per watt', or 'at the same power', or 'at the same clock', as their previous Skylake microarchitecture. Which tells us precisely very little.
They bloody well should give better performance per watt, because they are fabbed on an entirely new, smaller process node. However, that still doesn't mean that they actually outperform Skylake at full tick. In fact, if the multithreaded benchmarks are anything to go by, they most certainly don't.
If the leaked benchmarks are anything to go by, then full load, multithreaded performance still trails behind AMD's 5950x, a chip that has been in the market for a year now. And the power consumption?! .....
Please think about this question carefully: if Intel wanted to produce a high performance cpu that could beat AMD's 5950x, then why didn't they just use 16 (dual thread) high performance cores? Or... if these Gracemont cores are so amazing, then why didn't they use more of them? Given that they can put 4 of them in the same area as a single p-core, that would seem to be a better option for maximising all core performance.
I don't mean to be critical of you personally, what I am really saying, is that we should be very critical when assessing Intel's marketing blarney, because it is misdirection, not reliable, instructive information.
I don't know... I feel we'll be running into task scheduling issues between P and E cores at the beginning. This feels like something that needs some time to iron the kinks out and do what it's meant to.
I really want to see how the older coolers will make contact with the upgrade kits
Should be fine - my Noctua U12S already has various adapters for Intel 2066, 2011, 1200, & 115x sockets, and AMD AM2, AM3, & AM4 sockets.
Really good video! Quite technical but completely understandable! 🤓
Glad you enjoyed it!
Going to give it a chance once the 14th gen is here lmao
Kiinndaaaa down for this. High key really down for this. Please be the next step in computing.
Excellent Thumbnail guys! Really good! 😊👍
Thanks!
"it has a new socket" well I mean this is Intel we're talking about
Yeah, but in all fairness, they actually needed a new socket for DDR5.
Imagine years down the road, both heterogeneous cores and 3d vcache have their kinks ironed out and become standard. This competition just benefits everyone in the end. But I'm still happy with my 3 year old 9700K, won't be upgrading anytime soon (although Ryzen 5000 was really tempting when that came out).
ryzen 5600x here....man get it. couldn't be happier with it :D
also running an RX 580 8GB, 16GB of RAM and a 1TB SSD. I'm literally not looking to upgrade ANYTIME soon. Probably in 3-4 years, and just my GPU
ASRock has always been the "special kid". I had an ASRock Socket 939 board back in the day that had both AGP and PCI-Express slots when most boards were just one or the other.
I had been on an 8700K for a while. I was waiting for Alder Lake, but having to get a new cooler, RAM, motherboard, CPU and M.2 drive is just far too much to spend to get the most out of it at the moment, so I picked up an 11700K and motherboard on sale instead.
prob could have gotten a 10900k instead
@@ZomgZomg007 Nearly double the price of an 11700k here in Aus!
My boy Snows is here!!
ayy
Yeah, I am happy for the new CPU. It has promise, but I will wait. I refuse to be any company's guinea pig.
Oh.. I can see this P and E core scheduler being a huge problem
I am glad to see all this competition more than anything. Intel is trying. AMD is back with a vengeance. Apple is winning.
Thus we all win. (:
You hit the nail on the head sir!
@@HardwareCanucks Thank you and you are too kind. I am humbled.
Not if you need a GPU. We'll all have to switch to M1 MacBook Pro's soon.
@ricky v If Apple wins, then we all lose.
@@richardconway6425 ??
Apple makes good cpus-> Intel and AMD are forced to compete and make better cpus
Thanks Hardware Canucks, no other tech channel breaks down a new CPU architecture and launch like you guys do
Expect a few more of these!
At this point Alder Lake has leaked so much its now an Alder Puddle
lol
I can see a 4 P-core 8 E-core setup or maybe a 2 P-core 4 E-core setup.
Seeing the i9 will have 8P and 8E, maybe the i7 will have 4P and 8E, the i5 will have 4P and 4E, Core i3 will have 2P and 4E, and Pentium will have 2P and 2E.
And they will sell it as more cores in a package which is true but mislewding because i3 will only have 2 really usable cores for gaming instead of 4 cores in 11th gen
Specs are already out, the i7's 8 p cores with 4 e cores and the i5's 6 p cores with 4 e cores. Not to say that more variations couldn't be released later but those'll be the ones immediately available.
Miss lewding?
I thought this was Boot Sequence all the way through until Snowz mentioned 'his channel' and I was like wtf XD XD But either way great vid, and never complaining about more Snowz
what are those speakers?
So.. Can I get an LGA 1700 MB, put my DDR4 kit on it and later upgrade to DDR5?
No talk about the DRM incompatibility?
So are the new performance cores going to be that much better? Because wasn't the IPC increase between Skylake and now something like less than 10%?
I'm guessing the IPC might be better on the E-cores, but the clocks will be very low.
Does anybody know the name of the monitor in the background?
Every time I am upgrading my computer it’s right before a big hardware change. Last time was 2013 and skipped the whole DDR3, M.2, PCIExpress something generation. Now I am buying a gaming laptop when next gen CPU, memory, PCIExpress are just around the corner
you know how people look different with a beard and when they don't have one?
this is the greatest change i have ever SEEN!!!
On a side note, I really like those studio speakers in the back, does anyone know the brand and model by any chance?
Should be the Edifier R1280DB's without the mesh grille. I own a slightly more expensive pair so that I could integrate it into a 5.1 down the road, ended up moving them to my PC and love them.
@@austindersch Oh, thank you, they sure seem like these!
Lovely video, everything well explained, thank you so much! Intel's idea sounds good and I am curious to see the execution of it
Thanks for the information!
Wow! Intel New Gen CPU Coming With huge Power!
Right now is a good time to stick to laptops for gaming since desktop builds carry a serious overhead. LGA 1700 is coming with a lot, but I'll be holding out until the shortages ease up.
Intel has retained cooler compatibility for over 10 years now; I don't think it's fair to nag them about changing cooler mounts, especially when cooler manufacturers are providing adapter kits for existing coolers.
Well we've gotta complain about something. ¯\_(ツ)_/¯
Love seeing Snows on the main channel.
So it's like ARM processors with 4 low power cores and 4 performance ones.
Yes and no. The way ARM's setup handles the instructions is quite a bit different since the focus of its LP cores is really task dependent whereas Intel's implementation allows the E-Cores to run ANY task, though of course some with more efficiency than others.
Not even close. Apple's M CPUs are still better and more efficient than Intel Alder Lake. Don't waste money on an Alder Lake laptop like I did
0:18 New socket and new memory and compatibility challenges are nothing new for x86, why would that be unlike any other previous x86 CPU? Also... it's not an x86, it's an x86-64 which is a radically different CPU architecture.
Alder Lake (12th gen), Raptor Lake (13th gen) and Meteor Lake (14th gen), will all be LGA1700. Alder lake will be obsolete within 3 quarters, Raptor Lake will be quite the beast and Nova Lake will bring quite a big step forward.
What also makes Alder Lake so special is the record breaking power usage when pushed to the limit...
I hope the channel "Boot Sequence" can hit more subscribers. Btw, the name Boot Sequence reminds me of an option on my PC's motherboard
3:43 Hang on, it says 2T per P-core...? What's up?
Performance cores have hyper-threading, efficiency cores don't. So for like the i7 specs with 8P cores and 4E cores it can do 16+4 threads.
P-Core = hyperthreaded....in MOST cases
E-Core = single core, single thread
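If it helps, the thread math above is just two threads per P-core plus one per E-core. A quick sketch using the core counts mentioned in this thread (which could still change):

```python
def total_threads(p_cores: int, e_cores: int) -> int:
    # P-cores are hyperthreaded (2 threads each), E-cores are not.
    return 2 * p_cores + e_cores

# Core counts quoted in the replies above.
for name, p, e in [("i9", 8, 8), ("i7", 8, 4), ("i5", 6, 4)]:
    print(f"{name}: {p}P + {e}E -> {total_threads(p, e)} threads")
# i9: 24 threads, i7: 20 threads, i5: 16 threads
```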
So... is this only useful for laptops?
While in the PC space, they will sell the power efficiency?
Energy prices are going up
Voltage might not really be a thing to worry about, but they can still fit 4 E-cores in the space that one P-core occupies, and 4 E-cores will outperform it on multithreaded tasks.
No, this is beneficial in high workloads on PC, especially gaming. Instead of having 8 normal cores focus on background tasks in addition to the game, the new efficiency cores can handle background tasks while the performant cores _(which are better than normal cores)_ can focus on the game. This will theoretically lead to better performance in anything that's CPU demanding. Now I can finally browse YouTube while rendering animations.
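In theory you don't even have to wait for Thread Director to do that sorting for you. Here's a rough sketch of pushing a background job onto the E-cores by hand with psutil; the core numbering is a made-up assumption, and on Windows 11 the scheduler is supposed to handle this on its own:

```python
import psutil

# Hypothetical layout: logical CPUs 16-23 are the E-cores.
E_CORES = list(range(16, 24))

def push_to_e_cores(name_fragment: str) -> None:
    """Restrict every process whose name contains name_fragment to the E-cores."""
    for proc in psutil.process_iter(["name"]):
        try:
            if name_fragment.lower() in (proc.info["name"] or "").lower():
                proc.cpu_affinity(E_CORES)  # keep it off the P-cores
                print(f"pinned {proc.info['name']} (pid {proc.pid}) to E-cores")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process exited or we lack permission; skip it

# e.g. keep a background encode job off the cores the game is using
push_to_e_cores("ffmpeg")
```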
With the shortage, I really don't think any thing new coming out will matter, not until 2 years from now.
Hard to get excited over unobtainable things.
@@HawkSea yup, scalpers don't play around when high demand is a thing. Computers, consoles, phones. Anything
@@poeticsilence047 and now they're scalping mini fridges
Lol
@@viztiz316 Regular ones or the xbox one? Lol
Don't worry i'll probably grab a few and put them on the bay. Gotcha covered
e-cores... because that's what we asked for.
OEM's certainly did!
One thing I don't understand is: what's the use of efficiency cores in desktop processors??
Multithreaded applications: 4 efficiency cores take up the same amount of space as one large core whilst providing a lot more performance. Also, running background apps.
Let this dude do more videos
Aw thanks man!
8:48 you right LOL
CMON INTEL!
Got b150m combo-g, gonna get a new hybrid again :) nice video, learned something new again
Wonder if there's even gonna be one. It would certainly be cool.
What chair is that?
The LEGENDARY Staples Hyken.
i thought i was watching Boot Sequence's YouTube video lol
TL;DW - Ryzen V-Cache drops in Feb or March and will likely be very competitive with Alder Lake, and Raptor Lake/Zen 4 will drop at the end of 2022 when DDR5 prices are a tad more reasonable. Skip this Intel 'Tick'
You know what has never been solved/fixed? Games randomly using the iGPU instead of the discrete GPU. You always see it as a common problem in forums and whatnot. I wouldn't be surprised if this started happening with E-cores as well.
Right. I'm sure Intel has worked long and hard with Microsoft to minimize the likelihood of that in Windows 11, but there's bound to be a couple situations that slip through the cracks, at least at first.
This is why I’m not an early adopter of new tech. I always wait 2-3 gens.
I am still using an i5 4670K processor. Torn between waiting for Alder Lake or buying a 5600X.
Just wait. It's not too far away. Plus you can see if AMD might lower prices
Didn't 11th gen release not too long ago, like back in March, and then 12th gen comes next month in November :O
If the performance is not upto the mark in irl apps, it'll be called older lake
The 128MB of L3 seems like a good idea for consumer applications, and an even bigger deal for database servers. For 3D modeling it will depend on how the 3D modeler is coded, but as most are now said to be GPU-compute heavy rather than CPU-compute heavy, Alder Lake CPUs probably will make zero difference versus the previous generations of CPUs. For compile workloads it depends on the code being compiled and the compiler. For many it will not matter, as everything needed would have fit into the 16MB of L3 of the previous-gen CPUs.
For games, as most use repetitive data on the CPU side, it depends on how the game is coded, but as most newer games mostly run on the GPU it most likely will make zero difference. For average user applications it might go a few ms faster, but no one will notice.
The part I see the most difference in will be database servers and virtual machine servers: database servers due to needing more than just the small 128MB of space to hold anything in, and virtual machine servers because the software isn't written for big/little, so it will need a software update to run well. System RAM access will be slower than normal, but people using VMs are, or should be, used to random slowdowns for no apparent reason.
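The reasoning above mostly comes down to whether a workload's hot data fits in L3 or not. A tiny sketch of that back-of-the-envelope check, with completely made-up working-set sizes:

```python
# Rough, illustrative numbers only: does a hot working set fit in L3?
L3_OLD_MB = 16    # typical previous-gen consumer L3
L3_BIG_MB = 128   # the stacked-cache figure discussed above

def fits(working_set_mb: float, cache_mb: int) -> str:
    return "fits" if working_set_mb <= cache_mb else "spills to RAM"

workloads = {
    "game hot data (guess)": 12,
    "compile job (guess)": 40,
    "database index (guess)": 900,
}
for name, mb in workloads.items():
    print(f"{name}: {mb} MB -> {fits(mb, L3_OLD_MB)} in {L3_OLD_MB} MB, "
          f"{fits(mb, L3_BIG_MB)} in {L3_BIG_MB} MB")
```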
I don't think that this type of CPU will be the future; I think it is a move from Intel to make up for how inefficient their CPUs are
You'd be surprised by how many CPUs utilise a hybrid architecture.
@@thenotsookayguy ARM did it better in a hybrid architecture
I only care about one thing and one thing only: will the 12th gen Core i9 be able to run MGS4 at 4K @ 60fps on the PS3 emulator RPCS3?
Right, right, that's what was missing from Intel, efficiency.
they still can chug over 330 watts lol
@@ThaexakaMavro It's tradition, mate.
Like name me a single i9 that can't reach 300 watts when overclocked.
For Alder Lake, well, the "breaks compatibility with older games" part is the reason why I most likely will not get it, unless they release a crack or remove the DRM on the older games it breaks compatibility with.
I'll start using Intel again when they finish with the Lake naming
What do you have against bodies of water?
8:50 Hahahaha please more of that inner voice!!
As we can see from Apple, a large number of E-cores is useless. 2 of them are enough to handle low-power tasks;
after that the CPU quickly switches to P-cores.
But in the end, all of this depends on how the system spreads the threads among the cores.
So nothing really exceptional.
E-cores are only relevant on laptops and smartphones. On PC we have P-states; cores can shut down or downclock.
The expected big-daddy Apple desktop chip is likely to get 8 efficiency cores on top of 32 performance cores, and the next-gen efficiency cores based on the M2 are likely twice as fast as the efficiency cores on the M1. Gen to gen, efficiency cores will see a big uplift and also free lower-priority tasks from the performance cores, saving on heat and power.
E-Cores and P-Cores... hmm where have I heard these before? 🤔 seems familiar
I thought I had accidentally clicked on a Boot Sequence video ...
haha !
Just sucks that all high-end overclocking boards will come out DDR5 only, making the upgrade much more expensive if you already own a good set of B-die memory. Even Intel 9th gen with a good memory OC will give you great gaming performance at 1080p and 1440p for a couple more years.
Anyone else think these companies are getting ahead of what can actually be noticeably processed? I have a 12-core Ryzen; why would I care about an extra 15 seconds off my rendering? For how much? No dude, I'd rather go on a vacation. Hell, I'll just go to the bathroom during those extra few seconds.
Actually snow is very good
Can it beat the M1 ?
Not sure on the i7 or i9 Alder lake. Plan is to keep either for 6 years at minimum when intel 20a hopefully releases.
Funny enough, most games at 4K are so GPU-bound that even if Alder Lake beats AMD it won't actually make a difference in gaming
Sorry guys, but if the leaked benchmarks have any validity, then this brand new, super chip is a complete sh*tshow.
Let me explain.
This is a brand new, '16 core' flagship processor, which somehow still trails behind AMD's Ryzen 9 5950x in multithreaded workload. That cpu is now a year old.
If, as the leaked benchmarks suggest, it consumes 330 watts of power whilst attempting to do this, then it is a complete disaster.
The 5950x has been measured at 127 watts under full, multithreaded workload, which is why the exact same zen 3 architecture on the exact same 7 nm process has found its way into 64 core Epyc server cpus. It is very efficient, and it performs very well.
If these power consumption figures for Alder Lake are even roughly true, then this architecture is going nowhere. It certainly isn't going into HEDT cpu's, and 28 core Xeons? I don't think so. As someone else in the comments pointed out, if this cpu had 16 high performance cores, which it would need to, to beat AMD's desktop flagship, then it would very likely need 500 watts to do it. Ridiculous.
I'm afraid that Intel's promotional and marketing jazz for this cpu is a classic exercise in misdirection.
Intel finds itself in a core and performance war with AMD, which it has been losing. Intel would love to sell a 16 core processor which could outperform AMD's flagship equivalent, but it can't. The reason? Simple. Process technology. Or the lack of it. It is just not possible for them to fab a high performance 16 core cpu on their existing 10 nm process node, which could beat AMD's equivalent in both performance AND power consumption. Oh, and price. Just not possible. It would be too big, too expensive, and consume a ridiculous amount of power. Alder Lake is already all of these things, while still not demonstrating a lead over zen 3.
So...... this is what they have done instead. In order to persuade you to part with a large sum of money, they are saying "here is our amazing new 16 core processor, based on a new microarchitecture, and built on our new 10 nm process, so it *must* be amazing, (surely?)".
Only what they are *not* admitting, is that it is a horribly compromised design, which was forced on them because of the limitations of their current process technology. It looks, on paper at least, like a microarchitecture much more suited to a mobile device, like a high performance laptop, but the power consumption rules out that option. So instead, it becomes a desktop processor, with 8 real cores, and 8 marketing cores. Brilliant! It's a 16 core chip.
But hang on, I hear you say, if these Gracemont 'e-cores' are so great (better than Skylake apparently), then why, given that they can get 4 of them in the same space as 1 p-core, do they not use more of them? Surely, using more of these amazing e-cores would result in better multithreaded performance? At the very least, it would mean the processor could run more threads, like 28, or 32, if you exchanged a couple of p-cores for 8 e-cores, or perhaps giving away 4 p-cores in favour of 16 e-cores.....
But alas, none of this makes much sense. Only careful and detailed benchmarks will reveal exactly how much an e-core is worth, compared to its bigger brother, the p-core. The fact that you have this hybrid microarchitecture will really muddy attempts to objectively define performance, unless the tools we have can reliably tell us which threads are running on which cores, and at what clock frequency.
For anyone considering buying one of these, I would remain sceptical until detailed benchmarks are available.
I am certain that this new platform will be an expensive proposition. As Snows says, motherboards supporting pci-e 5 and ddr5 will be very expensive, and that's before you've even bought your RAM. It will take a while for the ddr5 market to mature, and until it does, ddr5 memory will be very expensive and in short supply. Additionally, it will likely offer no performance advantage over fast, low latency ddr4, at least initially. I would guess, that in the current market conditions of distorted supply and demand, that it will take 18 - 24 months before ddr5 gets to be a reasonable proposition. In that time, we will have had two more generations of Intel microarchitecture, and very likely a new socket, or at least a '.2' version of the new lga 1700 type. All in all, it makes no sense whatsoever to buy into this platform.
Quick edit: I have just had a quick check on the leaked cpu-z benchmarks, and realised the following:
The person who posted the cpu-z benchmarks (@9550 pro) clarified in a twitter update, that the Alder Lake 12900k results were at an all-core overclock of 5.2 GHz on the p-cores, and 3.7 GHz on the e-cores. This is interesting. This all-core overclock resulted in the 12900k very narrowly beating the 5950x in the multithreaded benchmark ( 11,986 points to ~11,900 ), and very clearly beating it in the single threaded benchmark ( some 800+ points to 600+ points ).
What can we learn from this? My first thought was that whoever did this test, overclocked the Intel cpu just enough to demonstrate a margin of victory over the 5950x. It is a very narrow, but convenient victory for the Intel chip. However, it came at such a high price, that it would probably have been unwise to push it any further, lest the power consumption got really embarrassing. If the Intel cpu needs 330 watts to do what a 5950x can do in under 130 watts, then that is not a win. Additionally, it is quite clear, that without this 5.2 GHz all-core overclock, the Intel chip would most definitely NOT beat the 5950x in a multithreaded workload, given its very narrow margin of victory. However, I am prepared to accept the single thread victory as an indication of the improvements Intel has made to the ipc of these Golden Cove p-cores. An all-core overclock is not likely to have made much difference in this case, since Intel chips have always boosted their clocks significantly for single threaded workloads. Although, given that it appears that p-cores and e-cores can run at different rates, it is really hard to know at this stage how this plays out in practice.
Conclusion: what is your priority? Single/lightly threaded workloads or heavily multithreaded workloads? If it is the latter then AMD still wins by a comfortable margin. And very likely whilst using less power. If your priority is high performance in single/lightly threaded workloads, the Intel cpu would appear to offer the best bet, but very likely at the cost of higher power consumption. However, it has to be said, that if lightly threaded workloads are your priority, then why buy a 16 core cpu?
+1
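For what it's worth, here's the points-per-watt arithmetic on those numbers. Every figure below is a leaked or claimed value quoted in the comment above, none of it verified, so treat the result as illustrative only:

```python
# Leaked/claimed figures from the comment above -- none of these are verified.
chips = {
    "12900K (OC, leaked)": {"mt_points": 11_986, "watts": 330},
    "5950X (measured)":    {"mt_points": 11_900, "watts": 127},
}
for name, d in chips.items():
    ppw = d["mt_points"] / d["watts"]
    print(f"{name}: {d['mt_points']} pts at {d['watts']} W -> {ppw:.1f} pts/W")
# If the numbers hold, the narrow multithreaded win costs roughly 2.6x the power.
```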
241 WATTS! YIKES WINTER READY! Nice one L0L Intel pulling all their Shenanigans to sell this
Looking at it purely as a gamer, I don't see any necessity to update current LGA 1151v2/1200 or AM4 platforms with 6/8-core CPUs until VGA prices drop to at least half of the current f**ked up prices. Because a f**king RTX 3060 for $1000 still stays a f**king RTX 3060 for $1000; even on a new platform with super-fast P-core & E-core CPUs and DDR5 memory, the card doesn't produce more FPS.
ASRock is very much that special kid. Overclocking those non-K Skylake CPUs. With their special BIOS. When Intel told them not to.
God I love Asrock.
Im here for the soothing voice of mike 😂
yo where in the hell is Dimitri ? :P
We're spreading the love so we can get more content out. TEAMWORK! :)
i liked for that ALMOST rant, LMAO!!!
DDR5 is going to come out with very fast memory way above stock specs. It's no different than before. DDR4 is still 3200 stock. Your points are groundless.
There are DDR4 kits WAY above spec too. What's your point? I have a DDR4-5100 kit right here in front of me....
@@HardwareCanucks Point being, let's see how DDR4 and DDR5 both do at stock settings in Alder Lake. We all know DDR5 kits will be out that are DDR5-8000. Here is what DDR5 brings: bandwidth that the next-gen processors are going to need. GPUs are going to go crazy with bandwidth real soon. Intel is simply giving its clients a system today that can be CPU-upgraded to the next gen, and that generation will be good for many years to come, if Meteor Lake is on the same pin layout as Intel said (3 generations). So get a MB with DDR5, period. Intel has already fired up Meteor Lake silicon and has said it's really good.
LevelCapGaming
background music
Good catch!!
I call it a flop; we will see the same thread scheduling issues as we have seen/still see with Zen CPUs. Heterogeneous designs are simpler and take less time and energy to do a task than a big.LITTLE design. I am 100% sure we will not see this design on desktop in a couple of years; heck, from what I have heard the Xeons will not have a big.LITTLE design, so that paints a clear picture.
Don't you mean homogeneous? Alder Lake is a heterogeneous design.
@@rna151 ah yeah, just had a brain freeze, you are indeed correct.
There is a reason why it works on phones, I don't see why it wouldn't translate well to desktops.
Xeons on the other hand are a different beast as you don't daily drive them, they're usually bought for a specific use only.
Wow this stuff is getting confusing. All these instructions nothing could go wrong eh!
Why is there a screwdriver sitting next to him with the handgrip facing in the wrong direction?
@suspicionofdeceit For what?
Even if AMD manages to hold the crown, it won't be a wasted generation, even if it's a failure, which I doubt. It will still push tech in all of those new directions. But it will not be an ideal outcome for Intel, and arguably the industry, if they don't at least match up in some performance benchmarks, or in real-world gaming, productivity and general office computing. Here's to the new renaissance in desktop computing, cheers 🥂
Intel could have replaced the low-power cores with high-power cores...ending up with a 500W TDP for the CPU alone
Intel learning something with Apple 😂Keep trying
One of the main benefits of having extra cores for low-power-requirement threads isn't that "oh, this process only needs a 5W core", because assuming a 20W core can get the task done 4x faster than the 5W core, it's a wash. Where Alder Lake should excel is the reduction of cache misses, significantly reducing the amount of time wasted loading a thread into and out of cache just for it to park until the next time it needs to do something.
By having 8 'low power cores' it means these programs can have dedicated cores.
A similar effect was seen with AMD's 12 and 16 core processors, where in gaming a 6-core might get the same average frame rate, but a 12-16 core would get much more consistent frame times.
This was apparent to me all the way back in 2015 when I picked up a used workstation with a pair of Xeon X5690 processors for $200 with no drives or RAM. Even though the single-core performance was wAAAY lower than a 4790K, it was still a better experience because I had 24 threads instead of just 8. The more stuff I had running in the background, the faster my workstation was over my HTPC/living-room gaming PC, even though they both had the same model GTX 970, and IIRC the 4790K had a newer version of PCIe.
Best way to build a space heater, at 300+ watts. Why not use an EPYC to play solitaire?
Good to know
If Alder Lake can't work well with Windows 10 it's gonna be DOA.
Intel has been burnt by many fake rumors and sites like this just make it worse. V-Cache will help in games, but 15% is an outlier. Intel's single-core IPC will rule most of '22 before Raptor Lake in late '22
How are we promoting rumors??