I personally enjoy these longer videos; it's nice to listen to them while doing something different. But great video as always :)
"Listen to them while doing something different"
Just like me 😄
Couldn’t nail it any better
Yeah, youtube doesn't like it.
Thanks Patrick. I really appreciated your technical dive towards the end
these detailed comparisons between platforms are great
Imagine if we didn't have AMD forcing Intel to actually become competitive: we would still be running 12-thread-per-socket machines with insane numbers of vulnerabilities.
The sad part is that Intel doesn't need to beat AMD, just get close enough that you are not willing to wait for the extra performance while AMD catches up on their production.
You are correct on the production side.
I have a feeling the whole situation will just flip, and then Intel will be the better option and AMD will be going "why? I am already dead, why are you killing me?"
@@hariranormal5584 If customers don't want to have options in the future, they should stay with Intel, as usual. But they should not complain when prices rocket (lake) again afterwards! :)
@@knofi7052 People who buy these chips usually buy them in bulk, so... well, you know.
@@heickelrrx Well, I know, but that shouldn't change what I said, should it?
Long is good!
Now I know the numbers are king, but I for one like complexity, and somehow you explain it in simple and easy-to-understand ways.
But I'm only a home user who loves tech.
Good video and thanks.
I'm just...stunned. Literally had to stop and think about this just to get some words. Intel. What. The. Fuck?
Releasing a generation of CPUs that doesn't even beat the PREVIOUS gen of your competitor, and pricing them against the CURRENT-gen competition?
That's...um...I want to say *bold*, but how far can you stretch that word before it becomes 'jaw-droppingly stupid'?
So right now the flow chart of buying an Intel or AMD CPU looks something like...
[Do you NEED persistent memory?] -----(Yes)-----> [Buy Intel]
                |
              (No)
                |
                V
[Is AMD sold out of CPUs yet?] -----(Yes)-----> [Buy Intel]
                |
              (No)
                |
                V
           [Buy AMD]
I really want to be surprised by this kind of insanity. But it makes TOTAL sense when you realize that it all boils down to making the next quarter's earnings higher. Screw anything after that, I guess; that's a "tomorrow" problem...
This is, honestly, the biggest problem with the world today.
It's just...exhausting...
Coasting on brand. As long as server partners keep pushing it they can survive, and with some large customers being a bit risk-averse with switching vendors it'll work out for a while longer.
I have had colleagues say "AMD cache latencies are bad because of the chiplet design!", to which Patrick has something to say at 19'45".
Apple Silicon and Fujitsu's Fugaku exist, so why not use Arm chips rather than x86?
Love these videos Pat!
Thanks!
For my FreeBSD 13.0-RC5 backup server, 1 Gbps is very hard! It just manages 200 Mbps with a 95% load on one thread of my 1C2T Pentium 4 HT (3.0GHz). Good video nevertheless, despite the huge difference between our server worlds. My "server" has two OpenZFS 2.0 lz4-compressed data pools:
- zroot, for the FreeBSD system and my middle-aged Virtual Machines (VMs) that still receive updates. Here I use 2 IDE HDDs; 3.5"; 250+320GB :)
- dpool, for my data; my latest VMs, including Betas and my ancient VMs not receiving updates anymore. Here I use 2 SATA-1 HDDs; 2.5"; 2x320GB.
The system is based largely on the electronics of a 2003 HP d530 SFF; 4x512MB DDR (400MHz); a new 600W xTech power supply (DOP800 = $16); and a Compaq Evo tower with Win 98SE stickers. The system is powered on for ~1 hour/week to receive my desktop's weekly backup (send | ssh receive). The system has been in use since June 2019.
The Ryzen-3 desktop runs Ubuntu 21.04 Beta on OpenZFS 2.0 as Desktop OS and as Host OS.
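For anyone curious what that weekly "send | ssh receive" step can look like, here is a minimal Python sketch; the snapshot, host, and dataset names are hypothetical placeholders, not the actual setup described above.

```python
import subprocess

# Minimal sketch of a weekly "zfs send | ssh zfs receive" backup.
# All names below are hypothetical; adjust for your own pools and hosts.
SNAPSHOT = "dpool/data@weekly"            # created beforehand with `zfs snapshot`
REMOTE_HOST = "backup-server"             # the backup box, reachable over ssh
TARGET_DATASET = "dpool/backups/desktop"  # receiving dataset on the backup box

# zfs send ... | ssh backup-server zfs receive -F ...
send = subprocess.Popen(["zfs", "send", SNAPSHOT], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", REMOTE_HOST, "zfs", "receive", "-F", TARGET_DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```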
Curious about the Optane vs SGX point. What is the conflict between these two items? Is it something about Optane that cannot be secured in the SGX framework?
Yay for longer videos! I'd still listen to them even if they were an hour long, as long as there's enough interesting content to fill it.
Thank you for the great video, stuff like the mutually exclusive Optane and SGX + the explanation of the latency at the end helps a lot 😃
Also the upbeat attitude is really nice, keeps me watching and engaged throughout the video.
good to see more competition in the server market
Hi!
I came back here to ask you some questions.
I changed my target from the Silver 4316 to the Gold 5320.
FINALLY it came out.
However, Intel says they will sell it as 'TRAY'. I think that means I can only buy it from Acer, ASUS, Dell, etc.
And I found the Xeon 5220 sold as 'TRAY' too.
But I saw some people selling it as a 'Box' version.
And Koit (an Intel® Authorized Distributor in Korea) said it is a refined article too.
I can't understand what the 'TRAY' version is, and I want to buy that CPU only.
Sorry for bad English.
Thank you :)
I am genuinely interested in the 4310 if there is a micro-ATX mobo available that isn't too insanely priced. Full-size ATX makes it too big right away, and mini-ITX lacks the slots I need.
Nice rig for coding and my radio stuff, to replace the now quite old 45-something i5 I have. It would also be better core-wise.
Yeah, I know, Ryzen. I have one for gaming, a 3300X for now, with a 5600X or 5700G when I can, and there is supply from big scalpers at not-insane prices.
To people complaining about the performance: this is what delayed products and technologies look like, and what long lead times from high-level design decisions to products in customers' hands look like.
How trusting would you have been, in 2016-2017, of Intel's 10nm yields in 2019/2020?
Would you have made the call to design a huge die and trust that the process engineers would deliver the 10nm process with good yields by that time?
This is still part of the fiasco of the 10nm delays. Oh, and if you were Intel, would you have predicted, before the first EPYC launch, that AMD would ship a 64-core part by 2020?
Intel delivered what they planned to deliver in 2019/2020 but got delayed because of bad yields on the 10nm process, and having more cores would have meant even worse yields.
We are in the year when THIS CPU was supposed to be replaced with the next-gen.
Ice Lake does not have more PCIe lanes; it just opened up for general use the 16 links that were exclusively used by Omni-Path on the last gen. (And the Xeon W-3200 series already did this.)
When we did the Gold 6248F review with the OPA100 onboard we were told that the on-package lanes are PCIe-like but technically lower power than normal PCIe due to the short distance involved.
@@ServeTheHomeVideo What they're saying is understandable. Since the differential lanes are very short and under the IHS, there is less crosstalk between lanes, so fewer ECC bits are needed compared to normal PCIe lanes, which reduces the power spent on encoding and decoding. But they also have normal PCIe mode capability, and even pins in reserve, as the W-3200 already has all 64 lanes unlocked.
We've seen a lot of these configurable multi-purpose controllers and slightly modified PCIe protocols recently. Infinity Fabric (GMI), UPI, and QPI are all this kind of thing.
Why are you recording in 24 FPS?
Intel made a smart move by adding hardware accelerators to their server CPUs. This lets them pull well ahead in many actual server workloads that would otherwise have gone to AMD on pure CPU performance.
Good point. General-purpose computing is great for business workloads running CRM, accounting, manufacturing, etc., but specialized functions that don't run well on GP processors would benefit from running on dedicated hardware. PC processors cannot do mainframe workloads because they lack I/O processors (we call them channels) and other esoteric features that are actually standard on big iron.
"crypto acceleration" refers to something other than AES-NI, right? i would really hope they're not removing AES-NI in some products
Yes. QAT (QuickAssist Technology).
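For anyone who wants to double-check on their own box, AES-NI shows up as an ordinary CPU instruction-set flag, separate from any QAT offload. A minimal, Linux-only sketch to verify it:

```python
# Linux-only sanity check that AES-NI is exposed on the CPU; it is a core ISA
# feature, separate from the QAT crypto offload discussed above.
with open("/proc/cpuinfo") as f:
    flags_line = next(line for line in f if line.startswith("flags"))

print("AES-NI present:", "aes" in flags_line.split())
```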
Thanks Patrick. I really wish Intel stuck to the similar naming convention from before.
Longer videos are good :) Thank You
Great video as always... thanks for breaking down the hype train from both companies and their fanbois to the realities of actual performance and what the new features and refreshes mean.
Always!
What would be great is to see these two systems under, say, database load and web server load, the same for each, and how they churn it out. I mean, that's what matters, right?
Those are our MariaDB pricing analytics and our STH nginx testing (we use our real site hosting data for the test)
@@ServeTheHomeVideo Yeah, I'm saying more of a real high-performance benchmark, like whether EPYC or Intel is faster on the high-end side of things. How they stack up in a heavy workload (given the bench workload is identical) would be fun to know, in SQL/nginx-style benchmarks. I feel it'll matter more what the workload actually is.
Did you do the benchmarks at 7:27? Looks like Intel's numbers. Wtf is S M L? Wtf are these numbers?
We have more of that detail on the STH main site. Hard to discuss KVM virtualization in every video. We have been using these workloads for several years now.
@@ServeTheHomeVideo Thank you for a great vid. Is there any way to reproduce your performance numbers on our own hardware? I see no code here.
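In the meantime, a crude way to get a like-for-like number on our own hardware would be a script that scales across every core. This is just a hypothetical stand-in, not the STH test suite:

```python
import multiprocessing as mp
import time
import zlib

# NOT the STH workloads -- a trivial, repeatable stand-in that loads all cores,
# so running the same script on two machines gives a rough comparison point.
PAYLOAD = bytes(range(256)) * 4096   # ~1 MiB of compressible data
ITERATIONS = 200

def worker(_):
    # Each worker compresses the payload repeatedly at level 6.
    for _ in range(ITERATIONS):
        zlib.compress(PAYLOAD, 6)

if __name__ == "__main__":
    n = mp.cpu_count()
    start = time.perf_counter()
    with mp.Pool(n) as pool:
        pool.map(worker, range(n))
    elapsed = time.perf_counter() - start
    mib = n * ITERATIONS * len(PAYLOAD) / 2**20
    print(f"{n} workers: {mib / elapsed:.0f} MiB/s compressed")
```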
What's your current outlook on Optane as a whole? Do you see it continuing on or being developed much further, or not really? I watched your last video, but I have seen a lot of somewhat confusing/contradictory info on the implications of Micron saying goodbye.
It will be here for this generation. I do think that Micron is right that this type of memory eventually moves to CXL if it is not the speed of DRAM.
@@ServeTheHomeVideo What is CXL? Edit: looked it up. Looks like a CPU to device or CPU to memory interconnect standard.
Great video! Why not publish the whole 40-minute video, for all who are eager to learn all the bloody details, as well as this 'short' one?
Hi Jay, most of what was cut-off was the "how we got here" and some of the look to the future of servers that we have in the STH main site piece. My thought is that people can skim the written form, but I understand that not everyone wants to read. Perhaps one of these days I will do an update to the 2021 Ice Pickle piece where I can go into the part that was left off a bit more.
If it's possible, could you describe in a couple of sentences the graphs that you show? I tend to listen to you without watching while doing something else, so it would help to have a rough verbal description so I don't miss the context of the next sentences. I believe there are more people who watch YouTube instead of reading the website and who might not be able to see the screen. Thanks
Did you take your own numbers at 7:25, Patrick? What's HMS? Or did you take the numbers from Intel's papers?
Consider posting the 40-minute version anyway?
The longer version had more tie-ins with the historical reasons why the Ice Lake launch looks as it does and how this launch will impact its future. I re-recorded the middle bit today focused specifically on the new launch so I can go into the other parts (covered in the main site review) in a future video.
I know that is not best, but I have slept 5 hours of the last 72 so at some point I just needed to constrain scope.
Cool
Intel should segment better, given they have an existing supply chain for the 2011 and 3647 sockets, plus the new 4189. The purpose is to balance CPU and IO capability with platform cost. There is no point in matching a $500 processor to 8-channel memory and massive PCIe. The LCC die should be 2011-socket only, the HCC die could be 3647 only, and the 4189-pin socket should be on the XCC die only. This way, the small and medium die products need neither the silicon for massive IO nor the motherboard implications. Really, multi-socket for the LCC die is stupid. With UPI, the 2011 socket could be 4 memory channels + 40 PCIe lanes?
Xeon Bronze was also used to allow large OEMs such as one that rhymes with "GPE" to sell "complete" servers for market share numbers while effectively putting an 8/16GB DIMM and a $213 CPU inside to make it a system rather than a barebones. There is a lot of strange stuff like that in the industry that is not discussed often. The Xeon E-2100/ E-2200/ W-1200 series out-performs the Xeon Bronze line already.
@@ServeTheHomeVideo Intel made a mistake in holding desktop at 4 cores for too long (Conroe to Skylake). Even so, there is a legitimate need for a system with more memory than 4 unbuffered (single-rank only?) DIMMs + 16 PCIe lanes, especially considering that no one made a system with 4 x4 PCIe slots. It does not matter if the current-generation 8/10-core desktop parts topping out at 4GHz can out-perform a Xeon SP LCC at 2.5-3GHz. The purpose of this platform is a configuration with 8 RDIMMs of 32/64GB each and 8 or so NVMe SSDs.
I think the cloud vendors figured out that the lowest cost per core is achieved at 22-24 cores in a 2S system - this is based on list price. Even if mega-customers get a discount, it still turns out the system cost per core amortizing motherboard + case/power supply (have cloud gone case-less?) is optimal at the higher core count.
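To make the amortization arithmetic concrete, here is a tiny back-of-the-envelope sketch. Every price in it is a made-up placeholder rather than a real list price, and the break point moves with whatever numbers you plug in.

```python
# Back-of-the-envelope cost-per-core amortization, illustrating the argument
# above. All numbers are hypothetical placeholders, not real list prices.
PLATFORM_COST = 3000.0  # motherboard + chassis + PSU per 2-socket system (assumed)

# (cores per CPU, hypothetical list price per CPU)
sku_options = [(16, 1500.0), (24, 2700.0), (32, 4800.0), (40, 8000.0)]

for cores, cpu_price in sku_options:
    total_cores = 2 * cores                     # two sockets per system
    total_cost = PLATFORM_COST + 2 * cpu_price  # amortize platform over both CPUs
    print(f"{cores}-core SKU: ${total_cost / total_cores:,.0f} per core")
```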
All good things come in threes. At 7:03 there is still a black part.
I need sleep. It is now a "time slot designed as a call-to-action" to check out the STH main site review.
@@ServeTheHomeVideo take care please
40 cores. Wow, big whoop. AMD has surpassed Intel by so much that it is downright embarrassing. Intel's technology and company are stuck in 2014.
You're so misinformed. Intel still has over 90% of the data center CPU market. Stability is a key thing there too, which just happens to be better on Intel. The added AI acceleration and the AVX-512 instruction set mean that when these are used, the 40-core Intel completely destroys the 64-core AMD, even if AMD has better IPC and more cores. AMD CPUs do not have a built-in AI accelerator or support for the AVX-512 instruction set.
I don't operate/work on/buy servers but I just wanted to see how they compare knowing the huge performance deficit Intel had last gen.
When I watch performance slides from any company, I always presume they are showing the best-case scenario. It's the only way to not get fooled by marketing or "performance strategy".
Needs an update
First video was much better ;)
Rough few days getting the main site article out then re-recording this.
@@ServeTheHomeVideo No doubt, a lot of work. This is a quality production! KUDOS
interesting.
I still can't get over you having two CS:GO Global Elite accounts.
Bow your head.
Way too late, and still years-old technology... They should have shipped this in 2015 or earlier.
How coool! The Cooper Lake 28-core parts are the 8380H and 8380HL, while the 8380 (without suffix) is Ice Lake and has 40 cores.
Clap. Clap. Clap. Sooo coool, Intel. Not.
H for Cedar Island/ Cooper Lake, L for large memory (all non-H Ice Lake SKUs have full memory support). Yes, confusing.
😮 YAY!
Who cares about Intel? Why should I buy their CPU when I can buy a twice-as-fast AMD with lower power consumption?
Maybe because Intel CPUs are actually in stock while AMD CPUs are not? I'm not paying scalper prices for AMD CPUs when I can just buy Intel at a lower price lol
...I definitely would have watched the 40 minute video.
Yea, Ice Lake is a Dempsey? mb
If u'r on Scalable Bronze in relation IL there is a ton of XSL/XCL and u know that, come on. mb
Yea the CPU to system bus is chocking and stalling (?) a bit. How congested? mb
Yea, keep the application on the Optane to CPU bus because if you don't (?) . . . mb
Now, please show how to make a simple RAID1. And show how frequently OMV6 crashes when applying changes, especially after doing any more configuration changes than creating one share. This software is just unreliable.
lol another time
2nd time is the charm right?
So as a consumer, you should buy AMD EPYC Rome over Intel's Ice Lake Xeon this year (64 cores vs 40 cores). Furthermore, for next year, you should buy AMD EPYC Genoa over Intel's Sapphire Rapids (96 cores vs 56 cores). Hey Intel, where's the technical leadership? Oh, by the way, Amazon, Microsoft, and Google are all designing Arm-based server CPUs because the Xeon CPUs are too inefficient!!!
Why do you even bother
ARM is the way