Well, the consoles have less L3 than the desktop chips (about the same as the APUs, I think), but they do have massive memory bandwidth. Still, that's only about a third of what they're talking about here.
FSR looks extremely primitive compared to even DLSS 1.0. DLSS 2.0 is in another league altogether. It still might evolve into something worthwhile, but what they showed looked like a blur filter, essentially. It was terrible.
So do we think that 3D V-cache is first going to be implemented in Ryzen 6000 series aka Zen 3+/4 or do we think it will be implemented in a Ryzen 5000 "XT" refresh for Zen 3? Also wouldn't a CPU with 3D V-cache run hotter than a CPU without one (assuming all else equal excluding the thinning) on account of worse thermal transfer to the cooler due to more material that has a worse thermal conductivity in addition to increased workload?
It will run hotter, so clock speeds will come down a little bit. That cache is also very expensive, so CPUs using this will be much more expensive than current Zen 3 CPUs! So Zen 3 will remain in production. These will sit above those, high above, in both performance and price!
According to AMD, both the cache chip and the CCX have been thinned, so the overall total thickness is the same as the original. It very well may be hotter, but I can't see it being by an order of magnitude.
You're going to trust AMD's numbers, after what we saw last time. Really? Both Intel and AMD lie at every new launch, and at every briefing, it's what they do.
These advancements in on-chip memory are really interesting, and I think it's really cool you guys cover this stuff. I can't wait to see what the future holds for this channel and the chip market.
Yeah totally! Like how can you even live with something so old and useless. You HAVE to upgrade BEFORE the new products are even out yet. Otherwise you're just a total peasant. A loser. A nobody. You're defined by the things you have. Not what actual need you have for them....
@@andersjjensen It's not that exactly... it's normal that development moves extremely fast and it's important to have innovations, but I thought the next upgrade would be on AM5, and AMD stated that the leak of new-gen 5000-series CPUs wasn't true and that they would have the same performance... I thought about waiting for Threadripper because I need the performance for my work, and more cache would have been nice on the 5950X since it's a nice performance uplift...
I wonder what the cost implications of this will be. On the one hand, they can increase yields by orders of magnitude, but that's not as important today. On the other hand, it's still a 45% larger die for a 15% increase in gaming performance. And perfectly stacking two dies with microscopic copper connections is no easy feat. I can see this making sense if they can offset some of the costs by reducing the process steps per die, especially that of the SRAM die. But I'm skeptical of that. Zen 4 might end up being pricey.
It's possible that the gains aren't near V-Cache's full potential, since Zen 3 wasn't designed entirely with it in mind. Something like Zen 4 could see a much greater performance increase from V-Cache due to being designed around it from the get-go. I doubt it'd be 45%+ for every application to offset the cost, though.
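On the yield point in the comment above, the usual back-of-the-envelope tool is the Poisson die-yield model, yield ~ exp(-area x defect density). The defect density and die sizes below are illustrative assumptions (the ~81 mm^2 CCD and ~36 mm^2 cache-die figures are approximate reported numbers, nothing official), so this is a rough sketch of why smaller dies yield better, not AMD's actual economics:

```c
/* Back-of-the-envelope die yield sketch using the simple Poisson model:
 *   yield ~ exp(-area_cm2 * defect_density)
 * The defect density (0.1 defects/cm^2) and die areas are illustrative
 * guesses, not AMD/TSMC figures. Note that stacking only helps economics
 * if dies can be tested before bonding and the bond yield itself is high. */
#include <math.h>
#include <stdio.h>

static double poisson_yield(double area_mm2, double d0_per_cm2)
{
    return exp(-(area_mm2 / 100.0) * d0_per_cm2);
}

int main(void)
{
    const double d0 = 0.1;  /* assumed defects per cm^2 */
    const double area_mm2[] = { 36.0, 81.0, 117.0, 250.0, 500.0 };
    const char  *label[]    = { "~V-Cache-sized die", "~CCD-sized die",
                                "CCD + cache, monolithic", "mid-size monolithic",
                                "large monolithic" };

    for (int i = 0; i < 5; i++)
        printf("%-24s %5.1f mm^2 -> %5.1f%% yield\n",
               label[i], area_mm2[i], 100.0 * poisson_yield(area_mm2[i], d0));
    return 0;
}
```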
Remember like 3 years ago when Intel was taking shots at AMD over gluing processors together? Now here we are, 3 generations of that tech later, and Intel is looking kinda stupid.
What the world should understand here is that all apps and all OSes must be optimized for this. If we were developers, we'd know that making something new and good means writing software for it: most software still barely uses 64-bit, more than 8 or 16 GB of RAM, or more than 4 cores at a stable, guaranteed 3.0 to 4.5 GHz without blowing the power budget. Developers have had to relearn two things: 1) 64-bit and 2) multi-core. Everything should have been optimized for those since 2009, but they're lazy even now, and the other instruction sets (SSE2, SSE3 and beyond) still aren't pushed to their maximum either, so benchmarks like 3DMark don't produce proper scores or results. These are really cool technologies from AMD and I understand the logic behind them; it's a shame nobody knows how to get anyone to actually use them (PS3 syndrome). But I also hope I'm wrong and the world proves otherwise... who knows.
Good! AMD is making its competitors run for their money now, and that is good. Intel CPUs have improved more during the last 5 years than in the previous 10 altogether! And it seems that Intel still needs to improve… Good, good! And AMD making Nvidia Pascal GPUs faster is a good marketing move!
I'm sure the folks over at AMD have already considered this, but I wonder what the thermal implications are of stacking memory on top of the cores. I imagine keeping them cool and removing that heat energy through yet another material will become slightly more difficult. I know they mentioned good thermal conductivity, but adding vertical mass is going to make it hard to get rid of heat which is primarily going to be coming from the cores.
@@jaredgarbo3679 I understand that. And I just edited my original comment to reflect this. But even if you use the most thermally conductive material in the world, you're still adding mass in a vertical orientation. Getting the heat from the cores through the memory to the heat spreader is going to decrease thermal performance. It has to.
@@bryanbernheisel7441 Obviously they have accounted for this. You have to remember that it should be on a new process node as well, which might balance it out and make it not much of a big deal.
@@DanafoxyVixen I don't doubt for a second that they accounted for this. But that doesn't mean that it won't have any impact either. With the way both CPUs and GPUs are more or less pushed to their limit right out of the box with aggressive boosting characteristics, this is going to reduce that limit, all else being equal.
Watch our OTHER recap of AMD’s Ryzen 5 5600G & R7 5700G: ruclips.net/video/rhKzZ8Knsus/видео.html
What if the maximum amount of L3 cache ends up being 640MB, like a CD-ROM?
Where was Jensen?!
Su Bae > Jensen
And what's wrong with 14+++++++++++++ nm technology?!? Steve is a nanometer bigot! ;)
Can you add a link to the AMD video presentation here somewhere as well?
That improved cache is critical to help feed the increased number of cores AMD has coming...Competition is good!
And you all know how rarely Steve is impressed.
That alone lol.
It's impressive as hell. If they start writing software to take advantage of this huge L3, it'll make our current chips look like dual cores by comparison; anything that has to go out to system RAM loses an obscene number of clock cycles.
I've been looking forward to this for years but it's going to be a while before they code for this as mainstream.
yes... usually he rambles for 30 minutes then tells you to buy intel, he didn't do that this time
@@thehen101 He's done the opposite in every instance other than those desiring absolute maximum framerates for gaming, in which case yes: you buy Intel, with supporting testing showing precisely why.
As a result, you're either too stupid to understand context, have too short an attention span to have actually watched the video, or are being completely disingenuous. All 3 are pretty bad.
The fact that would be the case and you posted that comment anyway? Even worse.
@@formdoggie5 The moment he said Steve was rambling, it was clear he understood absolutely nothing.
@@clydesanchez5158 lol
. True story.
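The point a few comments up about clock cycles lost going out to system RAM is easy to see with a pointer-chasing microbenchmark: chase a randomly shuffled chain of pointers and watch the average time per load jump each time the working set spills out of a cache level. A minimal sketch (sizes, iteration counts, and a build like cc -O2 are arbitrary choices; the absolute numbers depend entirely on the CPU):

```c
/* Minimal pointer-chasing latency sketch: average ns per dependent load
 * rises sharply once the working set spills out of L1/L2/L3 into DRAM.
 * Sizes and iteration counts are arbitrary illustration values. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t n_elems, size_t iters)
{
    size_t *next = malloc(n_elems * sizeof *next);
    /* Build a random cyclic permutation (Sattolo's algorithm) so the
     * hardware prefetcher can't guess the next address. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile size_t p = 0;
    for (size_t i = 0; i < iters; i++) p = next[p];  /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}

int main(void)
{
    /* Working sets from 32 KiB (fits in L1) up to 512 MiB (DRAM). */
    for (size_t kib = 32; kib <= 512 * 1024; kib *= 4) {
        size_t n = kib * 1024 / sizeof(size_t);
        printf("%8zu KiB: %6.1f ns per load\n", kib, chase(n, 20 * 1000 * 1000));
    }
    return 0;
}
```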
You have the best descriptive videos, period. My OCD does kick in a bit when I see the curtain rod above your right shoulder; please move it up just a couple of inches to get it out of the grey backdrop.
I'll be happy to see newer technologies being implemented.
EPYC with this could very well be insane, and it'd be no slouch for desktops/workstations either. If this is as good as the last few years, AMD is knocking it out of the park once again.
Cant wait for Linus to watercool 3D cache...
two generations later: the RAM is in the CPU.
that would allow for many more cores, as it would ease giving the cores enough data to work on.
AMD was doing L3 cache back in the K6-III days. If the motherboard provided proper support, its onboard cache got demoted to L3. Super Socket 7 motherboards could be had with up to 2MB of cache.
@Gämers Néxus I feel honored to be deemed worthy of trolling by Steve.
Wow, such a nice surprise from the Gamers Nexus team to cover this so quickly, thank you Steve, Patrick, Patrick, and Snowflake!!!
You forgot Patrick
@@plasticbleach4004 I did LOL
Tech Jesus is always here for us
snowflake is the mvp of the gamers nexus team
we all know its snowflake doing all the work smh
AMD flexing FSR on one of Nvidia's own cards is awesome to see. The fact that AMD competes so well with Intel and Nvidia is astonishing, considering both of its competitors are multiples larger than it.
The fact that Nvidia itself isn't supporting that on its own previous-gen cards says a lot.
@@ghostriley22 This has always been Nvidia's shtick: proprietary features only on the new generation, in a closed environment, to force an upgrade.
FSR quality is awful
@@dreamhackian4864 DLSS 1.0 was also not very reliable when it came out.
Give it some time, it will get better.
@@dreamhackian4864 My bad, I didn't realize you've been using it this whole time. Competition is what we need in this space, it'll take time to mature, just like every other new technology.
Imagine that: there are lightweight OSes that could fit into the L3 cache alone.
It would be interesting to see a system boot without DRAM. Skipping memory training would mean faster booting, too!
My neighbor developed Tiny Core Linux, an 11 MB GNU/Linux OS.
Monitor response time is slower than the OS could start up 😂
@@robertjif6337 Just because it fits in cache does not mean it boots from cache.
Still needs to load in from slow SSD. Though a 7GB/s read speed may help there.
@@jamegumb7298 That's with current technology; soon it might not be, maybe.
Goddamn, this is insanely fast, full credit to how on top of this you guys are.
IIRC one tweet said that reporters got a little heads up press pack, but the surprise was the 3D V-Cache announcement.
I'd expect nothing less from the man who makes a living of this type of information and practically lives at the office.
You used to see people running a game off VRAM.
Now we'll see games running off cache.
The original Doom can already fit in CPU cache; now imagine Doom 2016 running off cache.
@@nicholasmitchell6025 Man, now I want to set up a cache disk and install doom on it. Is that theoretically possible?
I mean, with how big the current cache already is, a lot of retro games could already run from cache. Now the goal is to run an OS from that cache. It would be the most responsive experience ever.
Actually, now that I think about it, it probably wouldn't make that big a difference, since an OS becomes I/O-bound very quickly, so it may not be that much better.
@@HoshinoMirai It would need some tight hardware integration to avoid having to go through too much software
AMD then: moar cores
AMD now: moar cache
And Intel ++++++++++
Moar value!!
Not just cache, *_G A M E C A C H E_*
@@ayuchanayuko The better to RAGE MODE with, my dear!
AMD later: MOAR CORES, MOAR CACHE, MOAR GPU COMPUTE. 350W CHIP
Nice coverage Gamers Nexus, and this is just the start of that technology, expect 3D stacked CPU chiplets to follow. AMD's engineers have to be getting their tech from some downed alien ship or something like that :P
Sure seems that way, they're definitely pulling rabbits out of a hat at the very least.
Not very likely. They stacked a low-heat-producing part on top of a high-heat-producing part.
Cache is not easily affected (much) by heat, however.
Stacking core complexes would just lead to increased production costs and internal hotspots.
@@kjeldschouten-lebbing6260 The important part of how they did it, though, was the direct copper bond with no solder; it's all in the process. What she highlighted is incredible and certainly a technology first, the stuff of fantasy made real.
@@kjeldschouten-lebbing6260 You are right; that's why if this is going to happen, it's going to be in server CPUs with low per-core power draw. I'd assume you could have a slightly underclocked EPYC processor with 128 cores, which would be absurdly space-efficient, and that's a very important factor considering how many servers sit in expensive cities.
Bruh, it's all about what you can make; you don't have tech unless you can make it.
RDNA2 coming to Exynos chips!
Yeah, Chromebooks and Galaxy Note 21 will probably be the first to see Exynos+Radeon.
@@robertstan298 Didn't Samsung can the note series? I heard they cancelled the Note and the Z line will replace it
Tesla is using RDNA 2 as well.
@@huleyn135 Quite a step up from that crappy Intel Atom infotainment system.
@@Jaker788 OMG Tesla uses a crappy ATOM CPU? What were they thinking using that horrible thing
You guys are always on it. Thank you for the coverage.
several minutes later:
_AMD replaces system memory with just cache_
edit: that's just an SOC with extra steps
(sorta)
I kinda want to see 32GB of HBM beside the CCDs. Who needs external DRAM sticks!?
@@asm_nop I'd prefer having the option to add more relatively easily.
What you're suggesting is not a good thing.
@@tigran914 He was partially joking.
@@tigran914 If it's the max the chip generation can handle at roughly the exact same or lower cost (due to board partner mass-order discounts), why? That'd just be silly.
It sounds like you haven't thought that one through all the way.
@@tigran914 They'd be stupid to not give you the option, but the fact is that it's where we're headed. It will essentially be a high-bandwidth L4 cache if you add more RAM.
3D Cache is amazing, but FSR gives me a FreeSync vibe. DLSS may be a bit better, but FSR being open source and more liberal in access may give it an advantage if the quality and support can keep up.
They're clearly running the same strategy as FreeSync: offer/enable some features on competitor products (albeit basic ones) to entice manufacturers and game developers into including them.
@@winnieid2727 Basically, kicking the opponent's d@ck
Yeah, it might be a bit weaker, but it should be available for more titles in the end
Freesync is awesome.
@@xPandamon Also consoles, which should make us see it more in the future. Of course, if it's really shit and they drop it, it won't play out like that.
Between this and DDR5, it sounds like AMD _really_ doesn't want Zen 4 to be data starved. Or in other words, they sound very confident in how fast Zen 4 can process instructions when well fed. 2022 is gonna be a good year for upgrades.
It will be if the GPU market ever returns.
And they want options for extra cache in EPYC
@@MundaneThingsBackwards I certainly hope so but as long as availability sucks we're looking at high prices.
Inflation + supply mismanagement will make it so that only a select few get anything. No one will be able to upgrade. I used to be optimistic, but it's just not realistic anymore.
@@MundaneThingsBackwards The shortage covers literally everything semiconductor-related. Ford is shutting down some of their plants because they can't get enough chips for their cars. It's probably not going to change until demand goes down or more chips can be produced.
Starting up with the statement that it is "one of the most impressive we've seen in a long time", I'm going to read up on the docs for the V-Cache.
And that up to 192 MB L3 cache from the title alone is insane.
I don't read this as "wow, 192MB"; I see it as 96MB per CCX, which means EPYC gets more epic. That's a crazy 96MB x 8 = 768MB, and that's going to be a tasty monster.
I really like my TR Pros, so I can't wait.
Wish they'd just 3D-stack HBM3 and give each chip a few gigs lol. I don't game much; I compile Linux kernel modules all day.
@@ZoeyR86 I've been saying the next step was going to be a huge L4 in the IO die, looks like I was pretty close to the mark.
@@ZoeyR86 then you'd have to fill the package with novec or something, otherwise there would be internal and hard to cool hotspots
@@ZoeyR86 nearly 1GB of cache... Jesus.
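Tallying the arithmetic from the comment above, using the announced figures of 32MB of base L3 per Zen 3 CCD plus a 64MB stacked die (anything beyond the 2-CCD 5900X prototype is pure speculation):

```c
/* Quick tally of stacked L3 totals using the announced figures:
 * 32 MB base L3 per Zen 3 CCD + 64 MB stacked V-Cache = 96 MB per CCD.
 * Only the 2-CCD 5900X prototype was shown; larger counts are hypothetical. */
#include <stdio.h>

int main(void)
{
    const int per_ccd_mb = 32 + 64;   /* base L3 + stacked die, per CCD */

    const struct { const char *part; int ccds; } parts[] = {
        { "Ryzen 9 5900X prototype (2 CCDs)", 2 },
        { "hypothetical Threadripper (4 CCDs)", 4 },
        { "hypothetical EPYC (8 CCDs)", 8 },
    };

    for (int i = 0; i < 3; i++)
        printf("%-36s -> %3d MB of L3\n", parts[i].part, parts[i].ccds * per_ccd_mb);
    return 0;
}
```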
The tech in just assembling that vertical cache is hard to imagine. Crazy impressive.
How so? It looks as if they just added more memory to the chip, and that is it.
@@_sky_3123 Intel is probably thinking the same thing
Not really. A student I worked with who was doing nanotech did the EXACT same thing as their final-year project, on their own.
@@mememachine5244 It's impressive when done on a nm level, otherwise yeah, it's just stacking vertically. Cooling will be interesting as well.
@@_sky_3123 Spoken like someone who has no inkling how this stuff actually works.
2:46 I can't hear anything other than "Intel has had its own 3D die-stacking approach, called BOB ROSS" 😂
No mistakes just Happy little accidents lol :)
@@iankovac1878 Beat me to it.
Omg once heard it can’t be unheard, why you do dis?!? Lol
Yes, the 1060 + FSR was a power move so I'm thinking AMD is pretty confident with the technology. Not showing a list of games that have it implemented is a little disconcerting but I think that the GTX crowd might be willing to wait a bit to get a free upgrade. I know my 8GB RX570 is amenable.
I've been wondering if with the close ties between Microsoft and AMD and the timing of Super Resolution going live just over a week after Microsoft's big E3 shindig that there might be some sort of correlation there. I've always thought that Flight Simulator coming to console would have to use super resolution. E3 might be a time for Microsoft to finally announce the console release date for Flight Sim, show it off, and show it using AMD's fidelity fx toolset including super resolution. I'm sure there are other games in Microsoft's back pocket that they'll show off using fidelity fx as well.
@@HalfUnder yeah I agree, they said they are working with 10 studios, if one of them is Microsoft, we can expect a lot from the E3 event
Given that FSR appears to be in a very early stage, I wouldn't be surprised if developers aren't 100% set to use it. Imagine AMD advertises that your game will feature FSR - basically to promote their own product - and in the end you have to drop it because it's not ready yet. Also, the 1060 demo looks quite blurry with FSR on.
@@ethanol89 I suppose that we will all find out the state of FSR on June 22 when it's released. You do realize that you are describing DLSS when it first came out. Except that it was only available to RTX cards. DLSS wasn't ready or implemented in that many games and was blurry. At least now a 1060 will have the opportunity for improved performance. FSR is open source, if AMD falls of the planet tomorrow I'm sure someone would take it on. I'm guessing adoption will drop like dominoes because if I was a AAA game developer or otherwise and wanted to be successful, FSR would give me a larger available market than usual.
@@ethanol89
I believe devs will be more eager to implement FSR than DLSS because of 2 reasons
1) it’s available on both Nvidia(even GTX GPUs) and AMD GPUs instead of DLSS (only available on RTX cards), looking at it from a dev perspective, if I have to choose between two technologies, I would choose the one which will be available for more People
2) it’s open source
First time I was this early I was in the hospital after the brakes failed on my AMD bike
Intel, driven by marketing division. AMD, driven by engineering division. Any questions? Awesome video, thanks for being lightning fast.
AMD: HBM2 was good but no one bought it...
AMD: Awh fuck it, make it L3 cache.
No one bought it cuz it wasn't put on any products, except the ones no one wanted.
I personally expected every GPU to have HBM2 by now... But here we are with GDDDDDRR4256XYZ
@@GreatMCGamer HBM2 is highly sought after by miners after all, and Radeon VII seems to be the only thing they make that satisfies them
@@GreatMCGamer It's expensive, that's why it was barely used.
@@Mojave_Ranger_NCR
That is also true, but it is a price I would be willing to pay.
@@GreatMCGamer So am I lmao
Keep pumping them out Steve! We can't get enough!!!
I'm so early L3 cache is still like 16MB.
5900x has 64mb L3 cache I think
@@Burssty I was merely joking using time as a reference frame.
@@JosiahBradley I'm old enough to remember when a quad-core was called a high-end gaming CPU and "you don't need more than four cores", and then we were stuck in a decade of Intel stagnation 😂
Dude I still have 6MB cache.
@@GundulmuGaming My first CPU bought specifically for gaming had 256KB of L2 (L3 wasn't really a thing yet), and the system had 256MB of DDR. Thinking that future CPUs will have as much SRAM as my first computer had in DDR is a huge leap.
Jensen Huang: "To all my Pascal gamer friends, it is safe to upgrade now"
* AMD brings out FSR super resolution which supports even Nvidia GPUs *
Me with my 1080 Ti: No, I don't think I will. 🙃
not like we can since there isn't stock anyways.
*The Leather Jacket:* Upgrade now or I will spank you with my spatulas!!!
Dr Lisa Su is like the teacher at school we all wanted. Enthusiasm and accuracy in the information she presents. :P
holy shit 192mb l3 cache? last time i was this early i had less RAM in my PC
64MB was a hella lot back when I started out. 128MB was performance level and VERY expensive. You could run a full Linux OS on this cache alone!
But I can technically still flex like 128MB of L4 cache on y'all.
AMD is going to fight for the HPC market with this huge cache. It's still dominated by Intel, but cache is a crucial thing for scientific computing. I wonder how much slower this cache is (or how much faster, compared to RAM).
2TB+ per second.
Well, Dr. Lisa Su said it already: it supports more than 2TB/s of bandwidth. Just to throw in some numbers, HBM2 is able to reach 256 GB/s of memory bandwidth per package.
Pretty sure it's faster than their current L3 cache.
As stated, exceptionally fast.
From memory, the real issue is that SRAM takes a lot of transistors per bit, so it takes a lot of die space. That's kind of solved by adding it from another wafer, with less chance of broken memory taking out the whole die. Still expensive per megabyte.
@@jamegumb7298 Xbox one 2013
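To put a rough number on why SRAM is so expensive per megabyte, as mentioned a couple of replies up: a standard SRAM cell is 6 transistors per bit, so just the data array adds up fast. This ignores tags, ECC, sense amps and other peripheral logic, so treat it as a lower bound, not a die spec:

```c
/* Rough transistor count for SRAM arrays, assuming the standard 6T cell
 * and ignoring tags, ECC and peripheral logic (so a lower bound). */
#include <stdio.h>

static double transistors_for_mb(double megabytes)
{
    const double bits = megabytes * 1024.0 * 1024.0 * 8.0;
    return bits * 6.0;   /* 6 transistors per bit for a 6T SRAM cell */
}

int main(void)
{
    /* MB: base CCD L3, one V-Cache die, total on the 5900X demo chip */
    const double sizes_mb[] = { 32.0, 64.0, 192.0 };

    for (int i = 0; i < 3; i++)
        printf("%6.0f MB of SRAM ~= %.1f billion transistors (data array only)\n",
               sizes_mb[i], transistors_for_mb(sizes_mb[i]) / 1e9);
    return 0;
}
```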
Whenever you want to complain about AMD raising prices, remember how much they're actually advancing the industry. Intel reamed you with 4c8t forever. AMD is giving insane cache and inexpensive 8c16t APUs.
I'm definitely going to get the 5600g
@Patsk88 "Whenever you want to complain about AMD raising prices" Yeah AMD fanboy, corporates is just corporates. Its not like AMD making CPU to save PC world, to makes charity or whatever the bullshit it is, they are just still corporates who trying so hard to makes money.
The hypocrites of AMD fanboy like you need to be disappear, it just bullshit excuse. The reason AMD still improving their tech is because they haven't fully beat Intel like Intel does with Sandy Bridge, if AMD on level with Sandy Bridge success with such huge market even beat Intel market share then AMD will be just like Intel before.
@@runninginthe90s75 There's no fanboyism when someone states a fact.
Intel didn't push the industry forward when they were claiming that's all they've done. AMD actively tried to push the industry forward at all times instead of stifle their competition like their competitors did to them and many other companies over the *decades.*
If you knew the history of companies like Intel, NVIDIA, Apple, Microsoft, etc., you'd understand just how destructive things have been behind the scenes, and question why these companies were even allowed to continue operating in the market.
@@runninginthe90s75 Certainly, that's fanaticism: trying to justify every single thing someone does by using any angle you can find to paint them as perfect. I also suspect it's one of those cases where someone just values money over everything else and even idolizes it.
@@dounin8876 Agreed. A price increase is a price increase and price increases are still bad for the people making the purchase. Trying to change the angle to say something that's bad for the end user is a good thing is 100% fanaticism.
Also, you know, minor thing... AMD raised the prices BEFORE they (maybe) advanced the industry with this 3D v-cache (we don't know because it's not in a product yet). So that whole argument falls flat on its face anyway.
Huge innovations from AMD 💡, Intel needs to catch up 😀
Thanks to Intel's Foveros, AMD can make 3D V-Cache :)
@@monkeslayer-km5ho It's from TSMC
@Sean Price no I mean the foveros
@@monkeslayer-km5ho It's different tech, not just in name, but in technical method. Foveros uses microbumps. TSMC's method uses TSVs, and is much better suited to what AMD is doing here.
@@adhahanif9792 Then the AMD fanboy lords should thank TSMC, not AMD. It's also pathetic to see AMD fanboys praise the 7nm CPUs so much while forgetting that the 7nm chips are made by TSMC, not AMD; AMD just designs the architecture. Have AMD produce Ryzen on GloFo and the current Ryzen would be nowhere near as good as it is.
So this is how they're going to stay ahead of Intel even if Intel went 5nm.
Intel is working on something similar. But it seems AMD is ahead in the race.
Zen 4 is already going to be using the 5nm process IIRC, so it's 5nm and the stacking combined.
Pretty much. AMD will have to keep innovating beyond just die shrinks ahead of Intel. Intel has always been able to get way more performance out of their nodes than AMD.
@DOOM SLAYER Intel's 10nm is bad, and there is no news on 7nm Intel, and TSMC's 5nm was out at the start of this year.
@DOOM SLAYER I guess you just proved his point?
Hey, are we getting even more Infinity Cache on future AMD GPU's with this? Assuming the extra cost to stack the cache die on top isn't too much, they could add more Infinity Cache without taking up so much die space, right?
The architecture is different between CPU and GPU. Infinity Cache is likely only suitable for GPUs, since it feeds the stream units inside them.
@@adhahanif9792 I would speculate that the Infinity Cache on the 6000 series is probably based on this tech to some degree.
They could probably adapt it but the interfaces are quite different. CPU has 2x 64bit memory channels, GPU has 8x 32bit memory channels. Everything is different but there's no reason they can't use the same concept.
@@glenwaldrop8166 You're comparing DRAM interface channels against a standard SRAM cache. The Infinity Cache L3 sits in the middle, and its interface is way different from DRAM's; it's the same as a CPU in that regard.
Infinite cache
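For some rough numbers behind the channel comparison above: peak theoretical DRAM bandwidth is just bus width times transfer rate. The configurations below are common examples picked for illustration, not anything AMD announced:

```c
/* Peak theoretical bandwidth = (bus width in bits / 8) * transfer rate.
 * The configurations below are common examples, not announced specs. */
#include <stdio.h>

static double peak_gbs(int bus_bits, double mtps)   /* GB/s */
{
    return (bus_bits / 8.0) * mtps / 1000.0;         /* MT/s -> GB/s */
}

int main(void)
{
    printf("Dual-channel DDR4-3200 (2 x 64-bit):  %6.1f GB/s\n", peak_gbs(2 * 64, 3200));
    printf("Dual-channel DDR5-6000 (2 x 64-bit):  %6.1f GB/s\n", peak_gbs(2 * 64, 6000));
    printf("256-bit GDDR6 @ 16 Gbps (8 x 32-bit): %6.1f GB/s\n", peak_gbs(8 * 32, 16000));
    printf("AMD's quoted V-Cache figure:          >2000 GB/s\n");
    return 0;
}
```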
I haven't watched you in a while. Nice new logo!
Where u been
@@gl4989 doing things.
Welcome back
@@purplepeak8575 smelly
The speed at which AMD is pumping out new technology is truly amazing!!!! I'm all for it .
2:37 - If people from 2 independent groups can interact with each other more effectively than people inside your single group, then you're doing something horribly wrong.
It's just corporate structuring in a nutshell: too many people in charge who don't need to be, passed off as "accountability", when really it's just "let's see who backstabs the other first to kiss the CEO's ass". If you need more than 1 person in charge of any one division, then you have a problem. I deal with it every day and it's fk'ing cancer.
The server crowd is going to love it when this is implemented on EPYC. Kind of surprised to not see it there first.
Servers need something that has some test miles behind it. It will definitely come to servers eventually. Most likely very soon…
Desktop Ryzen often seems to be the testing ground for a lot of AMD's tech. They know the audience that buys DIY Ryzen is willing to put up with more BS than the average consumer.
I've been holding off for something big and this looks like it fits the bill. I'm a happy 1st gen Ryzen owner.
In the same boat. I hope ddr5 delivers
Man, that's old by today's standards lol. I'm sure you'll have a blast with Zen 3.
@@Isasnotes 2017 was amazing for cpus and I am happy AMD came back
Still rocking an i5-4460, waiting for DDR5.
7:10 Livestream comment: "UserBenchmark in shambles" LMAO
Hahahaha
Nah, they will somehow make a new test where Intel still wins… even one test that carries 99% of the weight in the test set ;)
Userbenchmark: AMD marketed itself to being better than Intel but Intel better okay?! It's all marketing!
Such an exciting time for CPU technology all around!
Seems like only AMD is making breakthroughs.
@@Datttsnake Intel is gonna have big little with Alder Lake. Unless AMD beats them to it, it's gonna be pretty cool
@@Eidolon2003 I'm not putting any money on that making a dent. That's only going to improve efficiency for laptops and reduce power, not make it faster. People have been saying every year for the last 6 that Intel will come out with some secret weapon and become king again, and it just hasn't happened. They've been caught with their pants down and have been playing catch-up since.
@@Datttsnake Reducing power draw is exactly what we need right now, with everything getting more and more power hungry every generation. It's getting out of hand to be honest. Even if it only makes sense in laptops, I think it's what x86 needs to stay competitive with ARM chips like Apple's M1 design in the laptop space.
@@Eidolon2003 I think it's interesting, and I do agree we need power to go down, but honestly I would say that applies more to GPUs, as current GPUs are insane power hogs versus the 180 watts of a high-end CPU. I do agree we need to lower power, so hopefully it takes off.
The vertical cache is probably the most interesting change in a long time, and I work in software and spend about 15-20 minutes for the initial compile before hot reloading kicks in, so definitely going to be interested in upgrading next year if the price isn't too out of whack.
It's 12am, I should be in bed, but here I am, watching Steve go all nerdy for AMD. Loving it!
Same my dude.. except it's 3 am.
It's 4 am. Thought u wrote 12nm instead of 12am.
Now you know what I usually have to deal with living in Asia: exciting news usually rolls out at midnight, and live streams begin during working hours. But exciting tech is always exciting, 24/7.
I read that as "12nm"... I need to sleep as well.
Damn, AMD is evolving more in 1 year than Intel did in the past 10 years.
I haven't seen Steve smile in a tech piece for months... until now. This must really be as good as he is saying to get him all excited ^_^
Just got my signed blue ITX mousepad yesterday, and your coverage of Computex is just another example of why I was so happy to support this channel! Your consumer-focus is such a boon to this industry in keeping companies accountable.
Can't wait for the clickbait videos saying the new APUs for 2025 are going to have 3D-stacked GPUs on top of the cores, making APUs that are faster than a 3090!
2TB/s memory bandwidth! *
*8CU available.
You got it all wrong, a 256 000 core apu using -10w faster than 78 rtx 6990s.
@myname ismyname it'll need more memory bandwidth, I guess DDR5 would probably do the trick with a half a gig+ of L3.
@@glenwaldrop8166 ddr5 is not even near enough for that! Ddr5 is good move to right direction, but any low end gpu will have more memory bandwide that ddr5 can give apus!
@@haukionkannel a half a gig running at 2TB/s would be a nice boost. Look at what RDNA2 is doing with 128MB.
50GB/s won't be enough for a proper GPU but it'll do quite well for an iGPU.
Zen has always been ridiculously hungry for memory transfer bandwidth, so it does make sense they would do everything in their power to offset the delegation to main memory. Buildzoid has ranted in particular about this quite a lot, I wonder what he thinks about this move.
5:40 "Technical confidence packed into really dense paragraphs." So 3D V-Paragraphs? ;)
AMD was incredibly smart in appointing a former engineer as their CEO. Best decision they've ever made.
Dr. Su is amazing and should be considered a role model for young women all over the world!
Yup, Dr. Su is an inspiration to us all. She brought AMD back from the grave and keeps hitting homeruns every single year.
Intel, HP, Boeing - all companies that got successful being run by engineers - and look what happened when that changed :/
More cache is always more better, but clearly GN doesn’t need that, you’re FAST!
Oh wow! This may push me over the edge to retiring my aging Broadwell i7. A large L3 cache definitely has an impact on real world applications.
Sounds like the eDRAM on Broadwell. Too bad Intel dropped that ball, just as they got greedy from a lack of innovation in core counts... AMD picked it up though, and good on them.
@@BroNapartay It's different; I had an i7-5775C in the past. That eDRAM appears in CPU-Z as L4 cache. It resides on a separate die on the same substrate as the core.
@@fleurdewin7958 And can still beat 7700k. Great CPU
@@fleurdewin7958 Yep, and L3 SRAM is much faster than EDRAM so I expect this to really kick ass
Gotta say this. I was more than impressed after watching AMD Computex event.
3D V-Cache was definitely the star of the show, really looking forward to seeing more information on it and eventually see it in action on released products.
192MB was the amount of RAM i had 15 years ago
30 years ago, we had hard disks with 40 MB.
192MiB in 2006?
Super old machine. 2GiB was standard.
@@Adam130694 It'd be about four or five years old at the time. Far from "super old".
@@TrueThanny It was.
Any tech older than 2 years was obsolete in 1980-2007 era.
Only after that Intel fed us with 4c/8t for a decade, and AMD could not keep up.
@@Adam130694 If it still does what you need it to, it's not technically obsolete.
I said it once, I'll say it again: AMD has better engineers than Intel.
I don't necessarily think so, it's more that they're allowed to run free and wild lol
But that looks very much like being better from the outside
I bet there's a lot of very frustrated Intel engineers!
yeah because they have lisa su
@@mduckernz real artists ship - AMD has more effective engineers at this point; that's all that matters to me!
Intel doesn't have engineers. There's a lab somewhere in an Intel building where the world's best computer engineers can be found, but as far as Intel is concerned, it's Yuggoth.
This definitely explains why the leaked Renoir render looked so thick
Raphael? We already have Renoir.
@@WayStedYou Yeah, that other R :')
THICC AS A GOTDAMM BRICK
So Thicc, AMD produce 7meters CPU 😂
Do you think this is similar to Intel's Crystal Well "L4" cache from back then? That stuff was super interesting and I was sad to see Intel drop it; there were already tangible benefits in applications back when their "L4" cache was almost surpassed by fast DDR4. I can't wait to see how this impacts applications.
It was expensive, but yeah… the speed was there!
Maybe in the near future this kind of tech will allow real advancements in the APU segment. The two biggest weaknesses have been the tiny CU count and of course limited memory bandwidth.
Shrink down to 5nm, pack in an actual respectable GPU in there, slap a fat stack of cache on there, and back it all up with DDR5. Might finally become that low-end dGPU replacement they always promised to be but never measured up.
Fr, could you imagine how fast an iGPU could be with all the new tech 👀
Lisa Su worked for freakin' IBM on POWER chips. She's a freakin' genius. Now she's in charge of a bunch of smart people at AMD. We ain't seen nothing yet.
Also, I want to thank GN for covering this ASAP. I love seeing you being on top of things. This is important, and so congrats on remaining relevant, and cutting edge. Thanks for the mouse mat. You're welcome.
I'm still waiting for HBM2e integrated with the CPU.
Wow, this came out quick.
One notable omission in the GPU support list is the RX 400 series. Can you guys ask AMD what they are going to do about it?
Isn't the RX 500 series the same as the RX 400?
@@marceelino Yes. The RX 500 series is a refresh of the 400 series. 400 series cards can be flashed to their 500 series equivalents with the right BIOS.
How does the V-Cache hold up to liquid nitrogen? Can it possibly debond? Could bumping the pot with the thermos be enough to cause cracking issues under that much cold?
So they just announced 3D V-Cache for CPUs, but tbh I think that might not be the most exciting application of it. One might look to upcoming MCM architectures, with multiple dies connected by a shared cache; in fact, I could totally see a Navi 31 XTX die with 3D V-Cache for some truly mind-boggling performance.
But what about APUs? They're super heavily dependent on RAM speeds, so is there an argument that you could accelerate them with a small, cheap quantity of V-Cache? Especially as chiplets and GPU "tiling" become cheaper, I wonder about either connecting the CPU and GPU components with V-Cache, or just putting it on the GPU segment, the very much underpowered portion of today's APUs.
192 MB of cache... LMAO, I remember my first PC (a 386 system) having 8MB of system memory and thinking that was a lot.
8086 with 512k
@Rahq Vuth damn
I have a 9900K and good DDR4 RAM; I'll upgrade to a 3D V-Cache Ryzen 9 5900X. I won't buy Intel, which forces me to buy a 360mm AIO...
This is for Milan-X; we won't actually see this on consumer CPUs till late 2022 at the earliest, and AMD likely won't have the volume or price point to launch it on most Zen 4 chips. And for those wondering, Intel's Lakefield did this already, but they never brought their 3D stacking to the mainstream
Yeah, I feel like it's Zen 4 oriented
Dr. Ian Cutress from AnandTech asked AMD about this and it is confirmed. This 3D V-Cache will go into production later this year for Zen 3 based Ryzen CPUs. Now, whether it's Ryzen Threadripper or AM4 Ryzen, no one knows, but given that the prototype was a 5900X, one could argue that AMD would bring this to the 5900X/5950X.
A 3D V-Cache 5950X would OBLITERATE Alder Lake even with DDR5 & PCIe 5 on its side… Anywhere from a 15-25% IPC uplift is a generational performance increase on the transition from Zen 3 to Zen 3+, and even on DDR4 & PCIe 4 it still blows Alder Lake away… That's funny
If that Intel processor ISN'T $600 like it was leaked to be priced at, there's literally NO POINT in buying an Intel version of the Apple M1 chip… Stupid
Having this level of cache bandwidth will help with core count. RAM bandwidth was the limiting factor for core count without going to 6 or 8 channel RAM. Even at 8-channel RAM, Epyc/Threadripper would be capped at about 64 cores. This will bring 32 or 64 cores to dual-channel RAM systems such as Ryzen.
By the time Ryzen has 64 cores, RAM channels will have been replaced with an HBM L4 cache + serialized RAM.
Stacked L3 cache only helps speed up one part of the pipeline. It's still less than 200MB of cache, which is microscopic compared to the 16-128GB of RAM that a Ryzen system can use.
DDR5 will be just as important for bandwidth starvation; AMD will likely have to redesign the Infinity Fabric to keep up with 5000 MT/s RAM.
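Rough back-of-the-envelope on that bandwidth-per-core argument, as a minimal Python sketch; the DDR4-3200 channel figure and the roughly 2 TB/s stacked-cache number are assumed ballpark values, not AMD specifications.

# Back-of-envelope DRAM vs. stacked-cache bandwidth per core (all figures assumed).
DDR4_3200_CHANNEL_GBPS = 25.6      # one DDR4-3200 channel: 3200 MT/s * 8 bytes
STACKED_L3_GBPS = 2000.0           # ballpark aggregate bandwidth of the stacked L3

def dram_gbps_per_core(channels: int, cores: int) -> float:
    # Peak DRAM bandwidth available to each core if shared evenly.
    return channels * DDR4_3200_CHANNEL_GBPS / cores

print(dram_gbps_per_core(2, 16))   # dual-channel Ryzen, 16 cores  -> 3.2 GB/s per core
print(dram_gbps_per_core(8, 64))   # 8-channel Epyc, 64 cores      -> 3.2 GB/s per core
print(STACKED_L3_GBPS / 64)        # stacked L3 shared by 64 cores -> ~31 GB/s per core

Under these assumptions the per-core DRAM bandwidth is the same whether it's a dual-channel desktop or an 8-channel server, which is the point above: a big, fast on-package cache scales in a way that adding memory channels can't.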
AMD's been on a roll for the last few years with Ryzen. AMD was the first to break the 1 GHz barrier, first to consumer 64-bit CPUs, first to dual core, first to a chiplet design, and now first with 3D V-Cache. Intel has done absolutely sod all for the last 10 years in terms of really innovative, game-changing CPU features apart from drip-feeding 4-core CPUs. Intel has great engineers, but they just always seem to be held back until they have absolutely no choice but to compete.
Is this AMD leveraging their console chip experience?
It definitely seems like it. This is making me even more interested to see what Microsoft shows off on the 13th.
Well, the consoles have less L3 than the desktop parts, I think the same as the APUs, but they do have massive memory bandwidth; still only a third of what they're talking about here.
@@glenwaldrop8166 I was thinking of the experience they have with SOC designs because they just made both of the next gen consoles.
@@bobby0081 they're monolithic chips though.
Dunno, interesting thought.
FSR looks extremely primitive compared to even DLSS 1.0. DLSS 2.0 is in another league altogether. It still might evolve into something worthwhile, but what they showed looked like a blur filter, essentially. It was terrible.
The day I see a “core starved” CPU I will cry
I doubt AMD is going to be the one to deliver that... they love lots and lots of cores :P
So do we think that 3D V-Cache is first going to be implemented in the Ryzen 6000 series, aka Zen 3+/4, or do we think it will be implemented in a Ryzen 5000 "XT" refresh for Zen 3?
Also, wouldn't a CPU with 3D V-Cache run hotter than a CPU without it (assuming all else equal, excluding the thinning), on account of worse thermal transfer to the cooler from more material with worse thermal conductivity, in addition to the increased workload?
It will run hotter, so clock speeds will come down a little bit. That cache is also very expensive, so CPUs using this will be much more expensive than current Zen 3 CPUs! So Zen 3 will remain in production. These will sit above those, well above, on both performance and price!
According to AMD, both the cache chip and the CCX have been thinned, so their overall total thickness is the same as the original. It very well may run hotter, but I can't see it being by an order of magnitude.
Can't wait for AMD to start using the 4th dimension like the case manufacturers do.
Yes, pull power through an empty pin
You're going to trust AMD's numbers after what we saw last time? Really? Both Intel and AMD lie at every new launch and at every briefing; it's what they do.
That was really awesome news to hear. The copper bonding also improves conductivity, with very impressive results.
Nvidia: Sorry GTX 1060 users, we will not be giving you DLSS. Now buy our new expensive shit.
AMD: Hold my silicon.
Wow... I've never seen Steve looking so impressed
These advancements in on-chip memory are really interesting, and I think it's really cool you guys cover this stuff. I can't wait to see what the future holds for this channel and the chip market
Damn, I feel like my 5950X is obsolete now 😭
It really, really, really isn't.
Yeah totally! Like how can you even live with something so old and useless. You HAVE to upgrade BEFORE the new products are even out yet. Otherwise you're just a total peasant. A loser. A nobody. You're defined by the things you have. Not what actual need you have for them....
@@andersjjensen It's not like that exactly... it's normal that development moves extremely fast and it's important to have innovations, but I thought the next upgrade would be on AM5, and AMD stated that the leak of new-gen 5000 series CPUs wasn't true and that they would have the same performance... I thought about waiting for Threadripper because I need the performance for my work, and more cache would have been nice on the 5950X since it's a nice performance uplift...
Do you guys ever sleep?
I wonder what the cost implications of this will be. On the one hand, they can increase yields by orders of magnitude, but that's not as important today. On the other hand, it's still roughly 45% more die area for a 15% increase in gaming performance. And perfectly stacking two dies with microscopic copper bumps is no easy feat.
I can see this making sense if they can offset some of the costs by reducing the process steps per die, especially for the SRAM die. But I'm skeptical of that. Zen 4 might end up being pricey.
It's possible the gains aren't near V-Cache's full potential, since Zen 3 wasn't designed entirely with it in mind. Something like Zen 4 could see a much greater performance increase from V-Cache due to it being integrated into the design from the get-go. I doubt it'd be 45%+ for every application to offset the cost though.
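As a rough illustration of the yield side of that trade-off, here's a minimal Python sketch using a simple Poisson defect model; the defect density and die areas are assumptions, not AMD or TSMC figures.

import math

# Poisson yield model: Y = exp(-area * defect_density). All inputs are assumptions.
DEFECT_DENSITY = 0.5             # defects per cm^2 (assumed, deliberately pessimistic)
CCD_CM2 = 0.81                   # ~81 mm^2 compute die (assumed)
SRAM_CM2 = 0.36                  # ~36 mm^2 cache die (assumed)
MONO_CM2 = CCD_CM2 + SRAM_CM2    # hypothetical monolithic die with the cache on-die

def yield_rate(area_cm2: float) -> float:
    return math.exp(-area_cm2 * DEFECT_DENSITY)

def silicon_cost_per_good_part(areas) -> float:
    # Each die is tested before stacking, so a defect only scraps that one die.
    return sum(a / yield_rate(a) for a in areas)

print(silicon_cost_per_good_part([MONO_CM2]))           # one big die: ~2.1 area units
print(silicon_cost_per_good_part([CCD_CM2, SRAM_CM2]))  # known-good stacked dies: ~1.65

Under these assumed numbers, stacking two smaller known-good dies wastes noticeably less silicon per shipped part than one big die would, and that's before any saving from the SRAM die possibly needing fewer process steps, as speculated above.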
SO!?
The future is about AMD, huh?
Hopefully news about AMD is always good and not a gimmick
And now over to Dr Ian Cutress, on his taste test of V-Cache.
forget sata ssd...
forget nvme ssd...
forget ram drives...
... all my homies use cache drives
Do you think this will come in Rembrandt? It could serve as Infinity Cache for both the CPU and GPU
If Factorio taught me anything it's that bottlenecks never die, they just pop up somewhere else.
3D V-Cache must be inspired by Samsung's 3D V-NAND
Remember like 3 years ago when Intel was mocking AMD for gluing processors together? Now here we are, 3 generations of that tech later, and Intel is looking kinda stupid.
What the world should understand here is that all apps and all OSes must be optimized for this.
If developers actually knew how to do something new and good, we'd have software that properly uses 64-bit, more than 8 or 16 GB of RAM, more than 4 cores, and a stable, guaranteed frequency of 3.0 to 4.5 GHz without blowing the power budget...
But so far every app maker has had to learn this from scratch.
2 things:
!!! 1) 64-bit technology, 2) multi-core technology... !!! Things everything should have been optimized for since 2009, but they are lazy even now !!!
Same for the rest of the technologies that still haven't been pushed to their maximum... SSE2 or SSE3.
Learn to count CPU cores...
...and to put out proper scores and results in the tests!!! Only then will any of this mean something; until then it's all dust in the face, firecrackers and fireworks... 3DMark 2017 is a really bad test.
Very cool technologies from AMD, and I understand the logic behind them... it's a shame nobody knows how to get anyone to actually use them... PS3 syndrome...
But I also hope I'm wrong and that I, or the world, will be proven otherwise..
Who knows..
what the hell are you rambling about
Ahh yes, 192MB of L3 cache on the AMD Ryzen "7 meters" processor 😂😂
AMD was the underdog for years and now they're crushing everything in sight 😆
Good! AMD gives its competitors a run for their money now, and that's good. Intel CPUs have improved more during the last 5 years than in the previous 10 altogether! And it seems Intel still needs to improve… Good, good! And AMD making Nvidia's Pascal GPUs faster is a good marketing move!
I'm sure the folks over at AMD have already considered this, but I wonder what the thermal implications are of stacking memory on top of the cores. I imagine keeping them cool and removing that heat energy through yet another material will become slightly more difficult. I know they mentioned good thermal conductivity, but adding vertical mass is going to make it hard to get rid of heat which is primarily going to be coming from the cores.
They are using copper which will improve conductivity.
@@jaredgarbo3679 I understand that. And I just edited my original comment to reflect this. But even if you use the most thermally conductive material in the world, you're still adding mass in a vertical orientation. Getting the heat from the cores through the memory to the heat spreader is going to decrease thermal performance. It has to.
@@bryanbernheisel7441 Obviously they have accounted for this. You have to remember that it should be on a new process node as well, which might balance it out and not be much of a big deal
@@DanafoxyVixen I don't doubt for a second that they accounted for this. But that doesn't mean that it won't have any impact either. With the way both CPUs and GPUs are more or less pushed to their limit right out of the box with aggressive boosting characteristics, this is going to reduce that limit, all else being equal.
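For a sense of scale on that concern, here's a minimal 1D conduction sketch in Python; the power, layer thicknesses, and conductivity values are all assumptions, and it ignores hotspots, lateral heat spreading, and the structural silicon placed over the cores.

# 1D conduction estimate: delta_T = heat_flux * thickness / conductivity.
# All numbers below are assumptions, not AMD figures.
POWER_W = 70.0            # heat flowing up through one compute die (assumed)
AREA_M2 = 81e-6           # ~81 mm^2 die footprint (assumed)
K_SILICON = 130.0         # W/(m*K), bulk silicon
K_BOND = 300.0            # W/(m*K), rough guess for the direct copper bond layer

def delta_t(thickness_m: float, k: float) -> float:
    heat_flux = POWER_W / AREA_M2          # W/m^2 through the stack
    return heat_flux * thickness_m / k

extra_silicon = delta_t(50e-6, K_SILICON)  # ~50 um of extra stacked silicon (assumed)
bond_layer = delta_t(1e-6, K_BOND)         # ~1 um bond interface (assumed)
print(extra_silicon + bond_layer)          # roughly a third of a degree C

Under those idealized assumptions the pure conduction penalty is only a fraction of a degree; the harder problems are local hotspots and keeping the total stack height unchanged, which is presumably part of why both dies are thinned.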
Steve looks real excited there, coool :)