So would that mean we'll get 6700 XT-kind-of performance at an RX 6600-like ~100W gaming TDP from Navi 33 (the 7600 non-XT?). Pretty sure the APUs will be a big leap as well, especially with that 6-WGP configuration.
If they have gotten power down, does this make 3D stacking more likely? A planned convergence of new technologies? And changing a socket on a CPU is a major thing for AMD; on a GPU it's trivial.
I understand what you're saying regarding power scaling, but AMD could have a chip that scales better. Apple pulled it off. Highly unlikely at the power ranges you're talking about, but one can dream.
I hope AMD comes out with some information about release date and performance. I really want to buy an AMD card because I don't like how Nvidia treated us, but DLSS and ray tracing are really nice features you don't want to miss today. I also don't want to wait any longer: my 2070 (non-Super) has to work pretty hard at 1440p/144Hz, and I don't want to leave performance behind when I build my new system for the years to come. I made myself a deadline of the end of the year to build it. AMD, please win this competition.
Actually, I do not believe AMD is cutting (planned) wafer supply at all. They are cutting 6nm down considerably, but that doesn't mean it wasn't already planned; they won't need as much 6nm and 7nm going forward. No, I suspect that rumor will be proven false for AMD.
It smells like fake news. Chinese military plans would be a tightly controlled secret, just like those of any country. Did China make some new demand or ultimatum of Taiwan?
@@SuperFlamethrower Dude, do you even realize what's going on in the world? The US has been fucking with China extremely aggressively since Blinken and Biden entered office. Taiwan is asking for a seat in the QUAD and the US is playing with its stance on the "One China" policy. The US is pumping Taiwan with guns just like it did with Ukraine recently. There's going to be another proxy war where Taiwan and Australia get steamrolled by China just like Ukraine and the EU are being steamrolled by Russia. The actual risk is that while the US is far too scared to engage in an open conflict with Russia, it seems like they're willing to go to actual war with China, a war the US will lose miserably. And that's the risk, because when this crazy neocon-neolib imperial-maniac club gets punched and owned by China, they may actually press the button and use nukes.
You doubt that AMD will gain more market share, while Nvidia announced it's ordering fewer wafers because the mining boom is over... and AMD is ordering more wafers, meaning they expect to sell more products. It really is simple.
AMD produces a HUGE lineup of different types of die: CPUs, APUs for laptops, PCs and game consoles, and GPUs for desktop and laptop, and AMD has been making a killing in server CPUs. Forgot to add: GPUs for servers too. They didn't have enough dies for current-gen products, so no, they're not going to cut back on wafers for next-generation products; they did cut back on wafers for current-gen products. Their advantage is that most of their product lines are on the same node. I think in desktop GPUs AMD gains market share even if Nvidia is better, because AMD WILL have better RT performance and they'll be priced better than Nvidia.
I'd personally want to see better 4K performance. It's the way of the future, and therefore AMD should totally try to optimize the tech for higher resolutions. Ray tracing? Eh... sure, that's great too, but I never used it with my RX 6800 GPU.
$1299 is optimistic. If AMD manages to outperform Nvidia, they'll also price above them. People who buy GPUs at that level don't go for the cheaper, more sensible option.
Well, I don't believe they will primarily sell the 7800 XT/7900 XT; only high-end gamers are willing to spend $1000 and above on a GPU, and they can sell way more at $500. At launch, sure, but in the end two cards sold with 2 GPU chiplets each for $350-400 use the same 4 chiplets as one 7800 XT at $800, and 2 chiplets more gets you the 7900 XT at $1000-1100. The only extra thing you need with the smaller cards is the I/O die. Of course the AIBs have to spend more because they need 2 or 3 GPU boards, but that wouldn't be AMD's problem, and they could save money on the VRMs, cooler and memory with the smaller cards.
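A rough sketch of that chiplet-economics argument; the prices and per-card chiplet counts are the hypothetical figures from this thread, not confirmed specs:

```python
# Napkin math: card revenue per GPU compute chiplet across hypothetical SKUs.
# Prices and chiplet counts follow the speculation above, not confirmed specs.
skus = {
    "2-chiplet card":       {"price_usd": 375,  "compute_chiplets": 2},
    "7800 XT (4 chiplets)": {"price_usd": 800,  "compute_chiplets": 4},
    "7900 XT (6 chiplets)": {"price_usd": 1050, "compute_chiplets": 6},
}

for name, sku in skus.items():
    per_chiplet = sku["price_usd"] / sku["compute_chiplets"]
    print(f"{name}: ${per_chiplet:.0f} of card revenue per compute chiplet")
```

Under those assumed numbers the revenue earned per compute chiplet ends up roughly similar across the stack, which is the point being made about smaller cards not being a loss for AMD.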
Relatively speaking, there is a shortage of DDR5: most of the chips get stockpiled for server use as DDR5 registered ECC memory, and a "tiny" portion is allocated to consumers as regular unbuffered non-ECC memory. A consequence of this situation is that almost a year after Alder Lake's introduction, DDR5 unbuffered ECC is still nowhere to be seen. This is the type of memory you would need for current Intel socket 1700 W680 motherboards, or upcoming AM5 Ryzen motherboards, to be able to use ECC.
If they do, they'll be a lot closer, and hopefully the better overall performance will make up any gap. They're supposed to make a huge leap with RDNA 3 though: like, if a card is 2x better at raster, it should be more than 3x better at RT, whereas RTX 40 is supposed to be 2.5x better at RT for 2x raster. At least that's what I heard from various leakers. I bet any Nvidia-sponsored titles will find a way to gimp AMD though. DLSS 3 is supposed to have a better denoising algorithm, so they'll probably get a good boost there too.
Yes, AMD will lose in ray tracing, but the gap will not be massive like last generation. The main thing that will separate RDNA3 from Lovelace will be features at the professional level.
It will be close, but Nvidia will likely win again. AMD is looking at a 3x RT boost over current gen. As they were half the speed of Nvidia this gen, that means they will only be 50% faster than Nvidia's current gen.
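Spelling out that arithmetic with the comment's own assumptions (a 3x RT uplift on top of a current ~2x deficit):

```python
# Assumptions from the comment above, not measured data:
# AMD RT this gen ~0.5x of Nvidia's current gen, and RDNA3 RT ~3x RDNA2.
amd_rt_now = 0.5
rdna3_rt = amd_rt_now * 3.0   # = 1.5x current Nvidia gen, i.e. ~50% faster
print(f"RDNA3 RT relative to current-gen Nvidia: {rdna3_rt:.1f}x")
```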
I think $1299 for the 7900 XT is optimistic... like, it's double the performance of the 6900 XT and only ~30% more expensive; that's a huge uplift in value. But if that's the case, then the 7800 XT will be $699-799 and only 5-10% slower than the 7900 XT, so that'll be the sweet spot. I really do hope it turns out to be true, because I do think it's the best-case scenario we can hope for.
Crypto is tanking, the economy is tanking. Even fewer people are going to buy GPUs at inflated prices. Yeah, they can price it at $3000 and end up discounting it to 1k or so. Remember, a product being 2x+ better than its predecessor doesn't mean a directly proportionate increase in price.
@@nomad9098 I know, but usually the price/performance ratio improves by smaller and smaller amounts each generation. I'm not so up to date on crypto and the economy, but I thought the economy was back to going up since we can deal with COVID now, and crypto... usually when it tanks, it still retains a higher value than before it spiked, so is this an exception now?
@@nomad9098 There is a lot of room to cut prices because gross margin is very high. It costs about $150 to produce that GPU, but that's not counting any design costs, which are significant.
The way I see it, Lisa Su saw how AMD had so many product offerings... similar building blocks, but completely separate and isolated teams, such that the GPU in an SoC from before her time looked nothing like the discrete GPU's core functions, each with its own team. She then took after Henry Ford and said... nah, one GPU, one CPU, etc., and we chop them up into their own silicon dies and stitch the dies together to make radically different products, cut down on the number of teams, or make bigger teams to ideally make a better product. She brought the assembly-line mentality to the silicon industry, then did a mic drop and walked out of that meeting knowing she had the biggest dick in the room. So take a GPU and stitch it together with boatloads of scalable cores and VRAM... you get Radeon... hook it up to a CPU, you get an SoC... and so on.
No, RDNA3 is not on N4. TSMC has been working to improve the HPC performance of ALL its advanced nodes. The problem for TSMC and the companies who use them vs. Intel is clock speed performance. At some point in the near future Intel plans once again to be a major fab for other chipmakers, and HPC is where they have an advantage IF they are on the same node as TSMC at any given moment. Intel 7, for instance, scales very well for clock speed vs. power consumption all the way up to about 5.5GHz; TSMC N7, on the other hand, struggles to hit that speed. I'm sure AMD and TSMC have had a few conversations on this topic and TSMC saw the writing on the wall. I remember TSMC talking about this maybe 18 months ago; in fact that's when I knew Zen 4 would be able to clock higher than Zen 3. How much higher? We're about to find out. I don't think AMD would put its caching scheme on another die. They would want that as close to the cores as possible, probably their raster cores.
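As a rough illustration of why those clock-vs-power curves diverge between nodes, here's a minimal sketch of the first-order dynamic power relation (P ≈ C·V²·f); the voltage-per-frequency points are made-up placeholders, not measured Intel 7 or N7 data:

```python
# First-order dynamic power: P ~ C * V^2 * f.
# The voltage needed at each frequency step is a made-up placeholder curve;
# a node that needs less voltage at high clocks scales much better.
def dynamic_power(cap_rel, volts, freq_ghz):
    return cap_rel * volts ** 2 * freq_ghz

vf_curve = [(4.0, 1.00), (4.5, 1.05), (5.0, 1.15), (5.5, 1.30)]
for freq, volts in vf_curve:
    p = dynamic_power(1.0, volts, freq)
    print(f"{freq:.1f} GHz @ {volts:.2f} V -> relative power {p:.2f}")
```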
Intel 7 is comparable to TSMC N3, not N7. Also what I said was that in addition to having it on the graphics dies there could be shared LLC on the I/O die, I didn't say it would be exclusive.
@@Coreteks we're all good except the first sentence. Intel 7 was in fact Intel's 10nm node and is STILL called 10nm by some people, although Intel now refers to it as Intel 7, and if one of their engineers called it 10nm they'd probably be talking to the big guy. Intel renamed their nodes about a year ago? I'm on a tablet and it's hard to go searching for data. They renamed their nodes to be in parity with what TSMC and Samsung call their nodes, BASED ON transistor density. So no, Intel 7 is a renamed 10nm node that has a transistor density similar to, not exactly like, TSMC N7; it's a little better than N7 for density. Consequently this is why Intel renamed what was GOING to be their 7nm node to Intel 4. It's based on the density of TSMC's naming scheme, since that's Intel's main competitor for HPC fabrication. If you feel this is inaccurate then please provide a link to a density comparison that shows differently, or some other justification for how you could possibly say Intel 7 compares to anything other than TSMC N7, although to be fair to Intel you'd now say it compares to N7P, maybe even N6P now, if that node exists. As a side note, and once again I'm on a tablet and I really don't like using one, the older Infinity Fabric AMD was using seems to me like it would be too slow for this type of graphics compute. The way you have laid out your diagram for a possible chip layout seems like it would require a different connection scheme, almost similar to Intel's tile connection, which is parallel instead of serial. I HOPE this is the case, otherwise I see issues with how fast this can possibly be. Or better yet, different companies including AMD and Intel are working together to produce a common interconnect, and the way that has been shown puts one die next to another, which should be parallel data transfer, not serial. Any news on this?
RDNA 3 + Zen 4 gaming laptops being released this/next year are going to be insane. For reference, the RX 6800M eats 145W of power and is within 10% of a 6700 XT. The 5900HX gets close to a 5800X. And that laptop gets 10hr battery life on a 1080p panel, and costs $1400ish. So just imagine a next-gen RX 7600M + Ryzen 5 7600H at $1000 packing 90-95% of 6900 XT performance at 1080p, with 6-7hr battery life from just a 60-70Wh battery; it will just kill Nvidia. That is, if AMD isn't a moron this time, like pricing the 7600M higher than a 4060M while having similar performance.
Are there even enough RDNA2 + Zen 3 laptops out there at the moment? AFAIK, the only "mainline" laptop that's going to release with that combo is the not-yet-released Lenovo Legion 7. The other one I know of is the Asus Zephyrus G14, but it turned out to be kind of a hot mess. Can we really be excited about next-gen AMD laptops? When would they even release, if they do?
@@frieren848 Have you not seen the ROG Strix AMD Advantage? The G14 is fine, it doesn't run hot. Also, 12th gen and Ryzen 6000 cost too much in laptops right now. You can expect Zen 4 laptops around December.
@@siyzerix Hmm, I didn't know there was a new, updated AMD ROG Strix 2022 model, I'll have to check it out. I know that Zen 4 is imminent this year, but aren't we talking about RDNA3 here? And by RDNA3 I mean the RX 7000s or whatever the next-gen laptop cards are going to be called. I'm way more interested and excited for the GPU jump compared to the CPU jump, because current CPUs are already powerful enough for laptops AND hot enough to be intrusive on battery and thermals.
@@frieren848 No, there's no new RX 6800M ROG Strix laptop. I was referring to the 5900HX + 6800M Strix as an example of why all-AMD laptops are great. That thing has great battery life, as in ultrabook territory. So just imagine Ryzen 7000 + RX 7000 mobile series battery life, even on smaller batteries. All-AMD laptops should be able to undercut Nvidia + Intel. If AMD doesn't bring in genuine alternatives at all price points, then I've had enough of their incompetence. The desktop RX 6800 could've easily been put in laptops as an RX 6800M; I suspect that was likely the plan, but Nvidia wasn't competitive enough, so AMD just put an RX 6700 XT into laptops and called it the RX 6800M, because that alone is enough to compete with Nvidia.
@@siyzerix The new Legion 7s are supposedly going to have a 6900HX + RX 6850M XT option, so I hope that one delivers! Should be better than the 6800M for sure. I don't think we're going to see a big leap from that combo in a long time, because AMD will focus on their Zen 4 APUs first and foremost (so better CPU but weaker GPU obviously). I'm not up to date on their roadmap, but their laptop RDNA3 discrete GPUs are probably a looong time away from now.
Late September is here and we're still not able to buy it :) Hope they are available soon at a good price vs Nvidia, since Nvidia's prices right now are a joke.
Another note: Nvidia is ALSO going to be using TSMC this time around. If they're on the same node, AMD gets no advantage this time vs. last gen. They would have to get almost all their advantage from this chiplet design or from the absence of other functionality vs. Nvidia, such as less powerful AI and RT cores, etc... In other words, I'm a little skeptical of that comment from the person at AMD saying the competition will have to push power a "lot" higher to get the same performance. Geez, I'm pretty sure that's why Nvidia moved back to TSMC: higher clocks with less power consumption, which I don't think Samsung's node is good at.
@@superkoopatrooper4879 So you're one of the people that got caught up in all the rumor hype. At MOST Nvidia has a single halo product that consumes 550-600W. The problem with rumors is that people become fixated on a SINGLE data point. Nvidia designs a connector that can handle a certain power delivery and all of a sudden OMG, ALL OF NVIDIA'S GPUs REQUIRE A NEW PSU!!!!!! No, Nvidia will ALSO get more perf/watt, just like their last-gen products did. In fact, for what Ampere delivers in raster PLUS RT performance, their power consumption is not much off AMD's, considering AMD couldn't touch Nvidia in RT performance, like no contest. Nvidia got more of a boost in perf/watt than AMD moving to current-gen products. People somehow forget this, and for whatever reason think Nvidia wants you to have to buy a new PSU. BS. Most Nvidia GPUs will not be much off AMD's for next gen, or they'd lose sales.
Nvidia has a heck of a lot of extra transistors. Things like tensor cores eat up a lot of transistors, so you need to be a lot more aggressive with power gating. By building smaller chiplets and emphasizing more efficient memory access, AMD does indeed have an efficiency edge. But it's like racing a Ferrari against a Viper: brute force versus a high revver. Yes, AMD is more than likely looking to at least double their performance. And Nvidia can do this too, but at extreme heat, power and cost.
@@donh8833 AMD doesn't compete in RT performance, so OF COURSE Nvidia has to use more transistors. Their image-enhancing technology is also better and requires more transistors. So what?? AMD is trying to catch up to Nvidia and will add more transistors to do exactly the same thing. The point being skipped, and this is usually because Nvidia guards information about its upcoming products a lot better than Intel or AMD, is that Nvidia gets the SAME perf/watt benefit AMD does by moving to a better node, and in fact it could be MORE pronounced for Nvidia because they are coming from Samsung 8nm, which isn't as good as TSMC N7, so if anything Nvidia has the bigger gain from the node switch. So no, I don't agree, and once again too many people got hooked by the rumor of high power consumption for Nvidia vs. AMD. Sure, Nvidia wants to make a halo product that beats AMD in every regard other than power consumption, but the rest of their product stack is going to get a better perf/watt uplift than AMD's similar products, while AMD is trying to catch up to Nvidia in RT performance and image enhancement combined with a speed improvement (FSR). AMD has to increase transistors to do that. So sorry.
@@superkoopatrooper4879 In both my replies to you I said the same thing, which is that I could see a SINGLE halo product pushing power that high, but I don't see it being 600W. That's me personally, based on past rumors versus what ends up being reality. I may have worded the two replies slightly differently, but the intent was the same. To me it now seems like you're looking for a way to come at me instead of dealing with the topic, so before I get to the point of wanting to call you an ass or something else, we'll just stop here. Figure out how to converse without looking for some way to go after the other person, dude.
@@citadelbarker6106 I'm not going to get into it. But it's obvious you have way too much emotionally invested in Nvidia winning. Nvidia will still win on RT and maybe marginally on upscaled image quality (they will lose that battle the way they lost adaptive sync/G-Sync). But I think AMD is going to win perf/watt and 4K rasterization.
Bro, I love your channel, but you really need to take punctuation breaks. Really. Pause after a comma and stop after a period. It is so incredibly hard to follow for someone with ADHD or other neurodiverse conditions. Hope you take the comment in the right spirit. cheers.
The analysis/prediction was entertaining and interesting as always, even if it doesn't turn out to be true. The only issue I had was with the statement at 20:02 that the 6950 XT is 2% better than the 6900 XT, which is complete BS! It's about 10% better, and it has a better perf/watt ratio than the 3090 Ti does; not to mention the perf/watt change from 6900 XT to 6950 XT is better in every way than from 3090 to 3090 Ti.
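For what it's worth, perf/watt here is just relative performance divided by board power; a quick sketch with rough illustrative performance numbers (6900 XT = 1.00) and the commonly cited board powers, not benchmark results:

```python
# Perf-per-watt comparison; relative performance values are rough
# illustrative figures, board powers are the commonly cited TBPs.
cards = {
    "6900 XT": {"rel_perf": 1.00, "tbp_w": 300},
    "6950 XT": {"rel_perf": 1.10, "tbp_w": 335},
    "3090":    {"rel_perf": 1.02, "tbp_w": 350},
    "3090 Ti": {"rel_perf": 1.12, "tbp_w": 450},
}
for name, c in cards.items():
    ppw = c["rel_perf"] / c["tbp_w"] * 100
    print(f"{name}: {ppw:.3f} perf units per 100 W")
```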
With the state of the gaming industry, and the world, why should I care about this technology anymore? Because I don't. The gaming industry is a joke, and so are our economies. Until that changes I don't see any point in paying attention.
@@DaystromDataConcepts Right, that just adds one more layer to why not to care. I can afford all of this stuff, easily; I just don't know why I'd buy it. I could go out and spend 10 grand on a PC right now, but why? I built my 3950X system anticipating GTA6, which didn't come. Starfield is delayed indefinitely; those are the last two reasons I even own a gaming PC. I might as well sell my system at this point.
@NANOHORIZON I hear you. I'm hoping Bethesda will surprise us and Starfield is actually good. Skyrim in space at 4K60 would prob make me want to upgrade (because I already have an OLED, otherwise buy an OLED before upgrading GPU!)
There's a lot to unpack here, and almost all of it is incorrect, @Coreteks. I would like to help you:
The interposer or IO die will be "on a node" - those two constructs are not actually able to be compared at all.
"nVidia only does GPUs" - demonstrably wrong! nVidia acquired Mellanox and this is the basis of their Bluefield/DPU product, not to mention their Tegra (heard of the Nintendo Switch?) and automotive lines.
"Infinity Fabric Controller" - Infinity Fabric is an evolution of HyperTransport, itself an evolution of the EV6 protocol used by the Alpha.
@7:32 - you're stating that an IO die and an interposer are equivalent. This is completely incorrect - an interposer provides connectivity, not functionality.
@8:55 - you are speculating as to the reason for the temperatures seen on the 5800X3D - it's not universally applicable. Other leakers have indicated inverse cache stacking for future products, further detracting from your speculation.
@9:07 - "Memory transistors aren't scaling anymore" - again, demonstrably wrong given information published by Samsung, Intel and TSMC.
@9:53 - You cannot infer anything based on your attempt at analysis. You do not account for architectural improvements or other factors at all.
@10:22 - "Which maxes that 222 millimetres squared" - the unit is "square millimetres" - please at least get the units right.
@10:41 - Again with the "millimetres squared" - completely incorrect.
@12:01 - Again confusing interposers and IO chiplets.
Happy to help you out.
Jesus man, even if you do disagree with the man, there are ways of expressing it without being an absolute dick about it. Until someone has the actual product, most of this remains speculation and educated guessing. At least he made a far more coherent video than most other leakers; let’s be appreciative of that.
No matter what the design, issues or challenges are, AMD wins and can do anything. They have designed the best "glued CPUs" ever, as Intel called them, kicking Intel's ass, so... the GPU will be something great in the next round, as RDNA2 GPUs have also beaten Nvidia in FPS while using less power. Can't wait for RDNA3. So it seems they may have one memory chip instead of the usual many to make 8, 10, 12, 16 or 24 gigs altogether; this will kill latency issues... which is great.
No one cares anymore, my man. Companies have a chip shortage and they're selling stuff fine; people have a money shortage and can't afford stuff like in the old days.
8:40 2.5D stacking? For all the ASMR sexy narration you do, that was a mind boggling thing to say. Like jeez Rick, read your script and think about what you’re saying. 2.5D is a wonderful idea on a two-dimensional screen, but it ain’t how this physical world works. Dare I say that was stupid?
Another video of Coreteks not knowing why there's a de-essing option in video/audio editing software that gets rid of ear-piercing sh, ch, th sounds in narration.
BTW just a couple days ago AMD published a new patent on distributed rendering across multiple chiplets (not just 2..). Here if you're curious: www.freepatentsonline.com/20220207827.pdf
You should pin this comment; it's getting lost in the comment section.
Today everyone has been talking about AMD having tensor cores and using that matrix multiplier thing (I really don't know much about all of this, but I'm learning... slowly), and that they might start a hardware-based FSR system not unlike DLSS... so doesn't this contradict what you're saying? I'm just trying to learn :)
@@Nobe_Oddy Users are committed to NVIDIA because they use NVIDIA-only software.
What about the idea of Infinity Cache stacked on top of the I/O die? If it were 64MB sections, then the same cache die could be reused for Zen chips and for RDNA chips, and they'd have someplace to use the faulty chips as 32MB dies in lower-end units. Because they sit on the less power-hungry I/O die, heat transfer shouldn't slow things down as much.
Design costs don't double just because you design a different piece of silicon. Common elements would have been figured out already, and a different die design made from those common elements would cost significantly less than the initial design.
AMD custom-designs silicon based on already-developed building blocks; it would be unfeasible for AMD to offer such services if the cost of each die were anywhere near the cost of developing the common elements.
Precisely. Still, you would have some overhead, the additional mask costs, and somewhat less flexibility in production of chiplet based assemblies to address the multitude of use cases.
Same with the software support, which accounts for over $100B of the figure shown.
It's so frustrating watching a video with so many points based off misconceptions that could have been clarified in a discussion with others in the community. I have no idea why Coreteks insulates himself from #silicongang discussions. He'd avoid reaching these embarrassing conclusions so often.
Each design still needs masks, tape-outs and test; but overall, is it easier to debug a smaller chiplet that's reused, or a large monolithic design?
Even in the 80's cell libraries were well established and bigger building blocks were being created.
Just like software reuse is desirable to increase reliability and lower costs.
I think it's easy to look back from where we are and think, yeah, it had to be done like this... However, do you think many people back in the 80s had the vision to see what their designs would be building blocks for and how they'd be used 40 years later? I highly doubt it. Look at how old x86 is... I doubt they even envisioned SoCs at the time, and so the GPU in an SoC likely originally was nothing like a discrete GPU.
RDNA3 APUs are going to be awesome for future handheld gaming.
Also Laptops this time round.
@@gamingtemplar9893 are you slow? handheld gaming accounts for more than half of all gaming. AMD can make a fuckin killing if they remain at the top of the mobile chip space
Just because Nvidia is paying more money to TSMC doesn't mean they get more allocation. From Semiaccurate,
"Due to Nvidia’s opportunistic switching between TSMC and Samsung, Nvidia doesn’t receive the same terms. Nvidia wants a lot of N5 capacity and 2.5D packaging capabilities next year and beyond as they prep launch for Hopper datacenter GPUs, Lovelace gaming GPUs, and continue to gain share in networking versus Broadcom. To secure this supply, Nvidia is prepaying billions to TSMC, something the previous 3 customers have not had to deal with. A big portion of this is also due to Nvidia’s growth at TSMC due to switching away from Samsung."
In fact, I would bet that AMD will get substantially more wafers than Nvidia despite paying substantially less. AMD is a long-term TSMC customer and works with TSMC on a much closer level than Nvidia.
"AMD and MediaTek are the two other preferred TSMC customers. They are mostly exclusive on the leading edge and therefore they do not deal with having to prepay large amounts for capacity. They get most the leading-edge wafer capacity they need, and the issues for their respective supply chain hinges on other aspects.
For AMD, these supply issues deal more with substrates and externally at server and notebook ODMs for components such as BMC’s and WiFi. For MediaTek, these supply issues deal more with PMIC and RFFE. As such, both firms’ pre-payment for supply agreements with TSMC are close to non-existent. In Q3 2021, AMD only notched up to $355M of pre-paid long-term supply agreements despite being amid the largest semiconductor supply crunch in decades. Most of this prepayment is dedicated to substrates."
All of this means nothing the moment China hits Taiwan.
The moment that happens we're all going back to 1997 technologically. For about 5-10 years.
@@NanoHorizon You... do know that most tech doesn't need 3nm silicon, right?
@@NanoHorizon China’s military isn’t capable of hitting Taiwan. Their military has more corruption than Russia’s and more political/central control. They also lack the sealift capacity to attack anyone in the region. Add to that the Zero COVID policy and other economic issues, and China is on the verge of a large economic collapse, worse than the US in 2008.
@@NanoHorizon Depends on when this invasion happens. If it happens this year or next, it would hit sub-7nm production hard, as only Samsung in South Korea and Intel in the West are capable of manufacturing 7nm-class products. And no, we won't go back to 1997, but supply gets cut and chips get more expensive as everyone will be knocking on Samsung's and Intel's doors. However, if the invasion happens in 2025 or later it will matter much less, as by that point TSMC will have several fabs in the USA and Europe producing chips.
I don't believe that Nvidia has to pay a higher price per wafer than AMD. The difference is that Nvidia has to pay the money 2 years in advance so TSMC can increase capacity.
Also keep in mind: Nvidia's datacenter department alone needs nearly as many wafers as AMD's CPU + GPU + console departments combined. They pay more in total because they also need a lot more wafers.
I like that you always bring something completely new to the discussion, not just repeating the same leaks over and over again. I don't really care what it ends up being, but I like these kinds of speculations. Either way, I appreciate a different and interesting video.
What’s your minimum specification?
Haven’t watched the video yet but I echo the same thoughts. Every video is good… I never miss one. Voice and delivery are also top notch for night-time listening.
Whereas with other content providers I might just read the title and thumbnail, because it's just a rehash of the same old leaks, Coreteks is several levels above on quality.
+1 here, great work! the way you present is great, too, with graphics supplementing what you're talking about and not just another guy in front of a camera talking on and on
Will they do anything for the 250 USD price point? or just APUs for the masses now?
I don't think so. I think the lowest they'll go will be Navi 33, which might go down to the 7600 XT, unless they really cut down the die. Lower than that will be APUs. Though there's talk that they might just cut the price of last gen way down, like the 6700 XT through 6600, and use those to fill the entry level. The used market should explode with good deals by then, though.
The 6600 should drop to that price soon, or you buy a used mining 3060. There won't be new cards in that price segment this year.
An Intel GPU if they price it appropriately.
Last I heard, Navi 31 was 6 MCDs and one GCD. I think maybe when he said no memory chiplets, he was talking about not having HBM rather than no memory controller dies.
What are your thoughts on a cyclical PC slowdown incoming?
I mean, it normally happens, but I kind of feel we're in a golden age for PC computing. A new CPU upgrade is still needed (or wanted, I should say) by enthusiasts. GPUs are moving fast, and we have a lot of people who missed the last-gen upgrade cycle, including myself, who want a new GPU. Steam Decks and portable gaming PCs are a brand new segment that's opening up. Little Phoenix will take it to a new height. Big Phoenix looks like a perfect Steam box. PS5 Pro and Xbox Pro. I think the PC industry doesn't have a cyclical slowdown ahead, but maybe I'm completely stupid and it does.
World governments giving our economies the shaft is, I think, the real question. That is, how much will the collapse of Western and Eastern economies affect overall demand? Full-on, people-starving depressions haven't cut down on industrial needs, or innovation, by all that much, and in some markets they even spur innovation more than good times. But I think how that happens, and how we recover over the next decade or so, will decide that.
@@laurelsporter GDP is down 1.5% (annualized rate, so it's really down like 0.4%) in Q1 in the United States. Sure, it's not growth, but isn't doom and gloom unwarranted?
Are you saying economic collapse is a future thing? What kind of time frame are we talking, short medium or long term?
2:30 While I don't disagree that the I/O die is probably going to be 6/7nm, that doesn't mean they're not going to have all the chiplets on the same node. What really hurt the 3000X/5000X series was the inability to run fast RAM because of that 12nm I/O die, unlike the 7nm I/O on the 4000G/5000G series (up to 2500MHz UCLK/MCLK/FCLK in my testing). They may wish to have everything on the same 5nm process to compete with Apple in efficiency (it seems like the 12nm I/O die constantly uses 15W at idle, whereas the 5700G can idle at 4W), support the highest-speed RAM, and maybe support the next generation of PCIe or higher-speed I/O.
You might ask why even have chiplets then; well, it still improves yields and allows symmetrical dual-CCD processors (instead of one chiplet having I/O and the other being just a chiplet).
The other possibility is that there *MAY* be some truth behind some version of Zen 4 coming to AM4, but more likely they want to transition away from having monolithic processors in laptops. The 5700G/5800U is pretty close to the size of the 6600 XT after all, making it in theory more expensive to produce than the 5950X, while selling for just a bit more than a 5600X. By having all processors be chiplet-based, AMD can mix and match everything, and by having it all on the same node, no one part will be held back by frequency/power consumption, allowing them all to work in laptop, desktop, or embedded.
I know Sam... we did some mountain bike racing a decade or so ago. I remember giving him shit for how power-hungry and uncompetitive AMD was; he was at Intel prior. I think it got under his skin. I'll have to meet up and check in with him. He's a super nice guy, and a beast on a bike. 🍻🍻 Sam is responsible for dozens of patents as well. A total Superman. Glad to have met such an inspirational person.
Quote from Toms Hardware: "We wanted to clarify the definition of a "chiplet approach" for GPUs, just to be sure AMD wasn't talking about HBM again. Naffziger confirmed that there would indeed be separate chiplets (not memory chips), though he didn't nail down exactly how AMD will do the split."
What Naffziger said is that they are not putting VRAM on the package like it is done with HBM (see Vega).
He did not say anything about using chiplets with infinity cache (SRAM) or memory interface.
Also doubling of FP32 per WGP will not double die size. My estimate of total die size is around 700mm² for full Navi31.
My estimate:
1 GCD with compute + IO
6 MCDs each containing 64MB of Infinity Cache + a 64-bit memory interface, sitting either around the GCD or stacked on top (like V-Cache).
Tbh I lean more towards 3D stacking, as that is already proven to supply high bandwidth with the 5800X3D, and it would explain why AMD does not push Navi 31 as hard as Nvidia pushes its chips.
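A quick sanity check on what that 6-MCD estimate adds up to; the 20 Gbps GDDR6 data rate is an assumed placeholder, not a confirmed spec:

```python
# Totals implied by the 6-MCD estimate above.
# The GDDR6 per-pin data rate is an assumed placeholder, not a confirmed spec.
mcds = 6
cache_per_mcd_mb = 64
bus_per_mcd_bits = 64
gddr6_gbps_per_pin = 20

total_cache_mb = mcds * cache_per_mcd_mb                    # 384 MB Infinity Cache
total_bus_bits = mcds * bus_per_mcd_bits                    # 384-bit bus
raw_vram_bw_gbs = total_bus_bits * gddr6_gbps_per_pin / 8   # 960 GB/s raw GDDR6

print(f"{total_cache_mb} MB cache, {total_bus_bits}-bit bus, ~{raw_vram_bw_gbs:.0f} GB/s raw VRAM bandwidth")
```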
My paper-napkin math and my experience say a 64MB cache MCD is not going to cut it unless they have an incredibly effective texture compression algorithm. SRAM will be on the I/O die.
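This is the kind of napkin math in question: how much a given amount of cache helps depends mostly on its hit rate at the target resolution. The hit rates and bandwidth figures below are made-up placeholders, just to show the shape of the calculation:

```python
# Effective bandwidth with a last-level cache in front of VRAM.
# Hit rates and bandwidth figures are made-up placeholders for illustration.
def effective_bw(hit_rate, cache_bw_gbs, vram_bw_gbs):
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

vram_bw = 960.0    # assumed raw GDDR6 bandwidth, GB/s
cache_bw = 2000.0  # assumed on-package cache bandwidth, GB/s
for res, hit in [("1080p", 0.75), ("1440p", 0.65), ("4K", 0.50)]:
    print(f"{res}: hit rate {hit:.0%} -> ~{effective_bw(hit, cache_bw, vram_bw):.0f} GB/s effective")
```

The point of the argument is that if the cache is too small for 4K working sets, the hit rate collapses and effective bandwidth falls back toward raw VRAM bandwidth.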
Jensen now has US$5B worth of wafer allocation and a dying mining card market... I bet he is regretting his decision.
What if I told you Bitcoin will be worth more than a million dollars inside of three years.
Would that change your diagnosis?
@@NanoHorizon three years? nah. five years? yes
@@scoringdigitsson.5194 Lol not even that, I bet it’ll max out at 170,000
@@NanoHorizon What use case do you see to justify that valuation?
@@LangweiligxD135 Use your mind...
There are 52 million millionaires right now.
There are only 19 million coins.
When BTC becomes as used as USD, or even close, these people will not even be able to have 1 coin each... meaning each coin will be worth millions of current USD.
Wasn't the prevailing rumor 1 compute die and 6 cache chiplets?
if you find a way in mathematics that the numbers do not interact between two functions, use that frequency to form base level code for interconnects.
I love to see Jensen feeling the pressure.
This design would be amazing. Just think about how Ryzen CPUs run: when a core hits a certain temp it downclocks slightly, but the work moves to another core to keep the clock speed up. If this is the design they do for RDNA 3, where when a GPU core hits a high temp the work moves to another GPU core to hold clock speed, that would be very interesting. It would make graphics scale the way Ryzen does, and it would mean software would have to change for overclocking and other tweaks.
As both Nvidia's 3090 Ti and the rumors about Ada Lovelace indicate, power consumption spirals out of control if they don't improve perf/W. If RDNA3 is indeed this power efficient and is chiplet-based, then AMD will probably win the performance crown for the halo/enthusiast segment this generation. CDNA2 is already a chiplet design similar to first-generation Threadripper, so it's a matter of making chiplets & drivers work with game workloads. This was a problem ten years ago with SLI and CF, but it might be different this time.
They have the money and don't care about price.
That is not true, because multi-chip still has latency problems and we do not know how well it will scale across games and applications. Top Navi 31 is also a single compute chip, just with chiplets off of the GPU die.
Another possibility is multiple IO dies. On the same package, for graphics and parallel compute, NUMA would not be a difficult hurdle for scheduling, *especially* hardware scheduling.
I am waiting for a 7800U with a 780M iGPU twice as fast as the 680M for handheld PCs. RDNA3 will do wonders at 35W TDP.
With CPUs, the I/O die can use the previous process because there is really no need for a more advanced one. But GPUs need much bigger bandwidth, plus resizable BAR eating into the bandwidth budget, so it's possible AMD uses the N5 process for the GPU I/O die as well.
I'm still waiting for 32GB of HBM2e memory as an L4 cache incorporated with the CPU chip.
Honestly I struggle to decide which GPU I will buy, and that's a good thing; competition is great. My only concern is whether AMD GPUs will work in my professional apps or not.
AMD seems to have been working on their software suite as of late. We'll have to see when third-party benchmarks come out.
As you say, true competition is good for the consumer.
Your videos are fantastic, you deserve way more subs
It's always a pleasure to see the YouTube notification saying there's a new video from your channel.
What about the new code for Matrix related things? Could that be AMD's tensor core equivalent?
They should make a GPU chiplet small enough to sit beside a Zen chiplet.
The chiplets being on different nodes is less wasteful, but it complicates everything.
I'd assume they won't use a giant interposer. That's expensive when they already use the equivalent with an elevated fanout bridge, like on the MI250. Much smaller dies…
What do you figure the minimum viable number of compute dies is for RDNA3? We know they'll be making a monolithic APU with RDNA3 IP, and we suspect that will be on 6nm, so why not use those same IP blocks to build a monolithic RDNA3 chip for low- and mid-range laptops plus low-end desktops? Something like one or two dies' worth of compute on the same chip as the I/O and fixed-function media hardware, with a smaller memory controller and bus... They keep low-margin parts on a marginally older process, maybe make up for that process with reduced communication overhead via smaller buses and no interconnects, and save on packaging costs too.
16:52 Bold. So you're saying N31 will have all the chiplets, N32 fewer of them and N33 fewer still.
18:43 That Naffziger quote is really because so much of the revenue is driven by the enthusiast level. If revenue was driven by, say, the entry level, there wouldn't be a need to push power because entry level is about delivering the most performance per dollar.
Still waiting for an RDNA 2 APU, but I will go for an RDNA 3 desktop APU, can't wait 😛
where does the sponsor get their keys from?
I wonder how they'll handle the chiplet height differences. The die thickness has to be extremely tightly controlled or this will be a cooling nightmare, unless they give it a (soldered) IHS.
Every wafer has to be extremely tightly controlled, but AMD does use solder to connect the dies to the IHS.
Building an arcade machine, I'll be using Ampere for that, but the new desktop is getting RDNA3 and Zen 4.
There are also active interposers. I think if this generation doesn't have them, they are coming
My next GPU will be AMD no matter if they are good or not. Not because of any fanboyism, but because of better Linux support.
Even if Nvidia sort of open-sourced its drivers recently, they've been reluctant about supporting Wayland, don't provide DLSS, and are missing a bunch of other features.
Nvidia is mean.
But once again I will go a tier down. First I had an R9 290X, now a 1070 Ti; the next will be a 6600, 7600 or 7500.
I couldn't even play some games properly with Nvidia drivers on Pop!_OS. I had to install Windows on another drive just to play a game.
We need a new up to date analysis please on the GPU front. Thanks.
Interesting. Either way the memory controllers will be on the chiplets, as fanning out the memory I/O would be impossible in the mock-up shown, and it would save on power.
Yes, it was crystal clear to me that the compute units would be divided up.
When I saw all those big numbers in the esoteric leaks :P I hoped you would manage to break down RDNA3 for us architecture enthusiasts. It's also fun to see how data communication is spread across all those new changes and improvements to the data pipeline. Looking forward to your next masterpiece; this one was solid and well put together to make it comprehensible, well done.
Have you informed yourself about the shape of the supposed I/O die? After he saw your first supposed package mock-up last year, Tom (MLID) said a silicon engineer told him it would be very tedious to make a piece of silicon with such an elongated shape. This doesn't necessarily contradict your 6+1 die theory, just the shape; the dies might be very different.
Cool analysis, but there doesn't appear to be enough room on that sliver of an I/O die for the large amount of Infinity Cache that's going to be on Navi 31.
LLC would be mostly on the graphics dies, the I/O die could have a small pool of shared LLC but not the bulk of it.
Well, it's been talked about for a while, but I guess this is the triumphant return of multi-GPU cards. It's not like they traditionally were, and this time around we won't have to worry about CrossFire or SLI, but IMO it counts.
3Dfx was 20 years ahead of their time.
Wouldn't AMD need to totally solve the chiplet latency issue for a GPU MCM design? That's the big deal, I think, if they do go with so many chiplets. They will have maximum yield right out of the gate.
For GPUs it is not a big problem. The scheduler knows the relative latency differences and can just fill up higher-latency queues slightly deeper. The memory amounts and addresses are known, and the totality of code to run is generally known. GPU code does very little random memory access, ideally none, and eschews real branches. While there are certainly devils in the details, it's primarily a load-balancing problem.
On CPUs, near-future memory and execution needs are unknown, which makes NUMA working halfway decently nearly a miracle.
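Not AMD's actual scheduler obviously, but here's a minimal sketch of that "fill higher-latency queues slightly deeper" idea in Python; Chiplet, dispatch and the latency numbers are all made up purely for illustration:

```python
# Purely illustrative: latency-aware load balancing across GPU compute dies.
from dataclasses import dataclass, field

@dataclass
class Chiplet:
    name: str
    latency: int                      # relative cost of reaching this die's queue
    queue: list = field(default_factory=list)

    def occupancy(self) -> float:
        # Normalising depth by latency lets the far die queue deeper,
        # so its pipeline never drains while work is still in flight.
        return len(self.queue) / self.latency

def dispatch(workgroups, chiplets):
    # Greedy balancing: each workgroup goes to the die whose
    # latency-normalised queue is currently the shallowest.
    for wg in workgroups:
        min(chiplets, key=Chiplet.occupancy).queue.append(wg)
    return chiplets

if __name__ == "__main__":
    dies = dispatch(range(24), [Chiplet("near", 10), Chiplet("far", 14)])
    print({d.name: len(d.queue) for d in dies})  # the higher-latency die ends up deeper
```

Run it and the higher-latency die ends up with a proportionally deeper queue, which is the whole trick: as long as every queue stays non-empty, the latency difference hides behind the work already in flight.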
Why would they be "memory" chips and not called "cache" chips? If the extra chips are an Infinity Cache kind of thing, they could be useful without being called memory dies.
Good background music, made it yourself?
I wonder what the limit is for scaling the production volumes of the chip fabs. Is it the cost or the time to build the machines, or both?
Great work. I think you're pretty spot on on all of this.
So will that mean we get 6700 XT-level performance in an RX 6600-like ~100W gaming TDP with Navi 33 (7600 non-XT?)
Pretty sure the APU will be a big leap as well, especially with that 6-WGP config.
Is there any more news on Phoenix Point? I want Dragon Range to be all in one package, no extra RDNA3 GPU.
If they have gotten power down, does this make 3D stacking more likely? A planned convergence of new technologies? And changing a socket on a CPU is major for AMD; on a GPU it's trivial.
Rdna 4 is probably already in the works.
These are things we play games with; I think pricing has lost sight of that.
I understand what you're saying regarding power scaling, but AMD could have a chip that scales better. Apple pulled it off. Highly unlikely at the power ranges you're talking about, but one can dream.
I hope AMD comes out with some information about release dates and performance.
I really want to buy an AMD card because I don't like how Nvidia has treated us.
But DLSS and ray tracing are really nice features you don't want to miss today.
But I also don't want to wait any longer; my 2070 (non-Super) has to work pretty hard at 1440p/144Hz.
I also don't want to leave performance behind when I build my new system for the years to come.
I've set myself a deadline of the end of the year to build my new system.
AMD, please win this competition
Both AMD and Nvidia are trying to cut chip allocation for upcoming GPUs.
Actually, I don't believe AMD is cutting (planned) wafer supply at all. They are cutting 6nm down considerably, but that doesn't mean it wasn't already planned; they won't need as much 6nm and 7nm going forward. No, I suspect that rumor will be proven false for AMD.
I think you'll be very lucky to get one of these if the Taiwan operation starts in fall.
It smells like fake news. Chinese military plans would be a tightly controlled secret, just like those of any country.
Did China make some new demand or ultimatum of Taiwan?
@@SuperFlamethrower dude, do you even realize what's going on in the world?
The US has been fucking with China extremely aggressively since Blinken and Biden entered office.
Taiwan is asking for a seat in the QUAD and the US is playing with its stance on the "One China" policy. The US is pumping Taiwan full of guns just like it did with Ukraine recently. There's going to be another proxy war where Taiwan and Australia get steamrolled by China just like Ukraine and the EU are being steamrolled by Russia.
The actual risk is that while the US is far too scared to engage in open conflict with Russia, it seems like they're willing to go to actual war with China, a war the US will lose miserably. And that's the risk, because when this crazy neocon-neolib imperial-maniac club gets punched and owned by China, they may actually press the button and use nukes.
You doubt that AMD will gain more market share, while Nvidia announced it will order fewer wafers because the mining boom is over... and AMD is ordering more wafers, meaning they expect to sell more products. It really is that simple.
AMD produces a HUGE lineup of different types of dies: CPUs, APUs for laptops, PCs and game consoles, and GPUs for desktop and laptop, and AMD has been making a killing in server CPUs. Forgot to add: GPUs for servers too. They didn't have enough dies for current-gen products, so no, they're not going to cut back on wafers for next-generation products. They did cut back on wafers for current-gen products. Their advantage is that most of their product lines are on the same node.
I think AMD gains market share in desktop GPUs even if Nvidia is better, because AMD WILL have better RT performance and they'll be priced better than Nvidia.
I'd personally want to see better 4K performance. It's the way of the future, and AMD should totally try to optimize for higher resolutions. Ray tracing? Eh... sure, that's great too, but I never used it with my RX 6800.
Hey @Coreteks, Coreteks... 4K IS A THING IN 2022 YOU KNOW! 😒
$1299 is optimistic. If AMD manages to outperform Nvidia, they also need to outprice them. People who buy GPUs at that level don't go for the cheaper, more sensible option.
Agree
@qwagor I wouldn't bet on AMD outperforming NV, don't believe the hype
22:15 This is false, other cards (even a Vega56) can also run ROCm, just without official support.
Well, I don't believe they will sell primarily 7800 XT / 7900 XT; only high-end gamers are willing to spend $1000 and above on a GPU, and they can sell way more at $500. At launch, sure, but in the end two cards sold with 2 GPU chiplets each for $350-400 use the same 4 chiplets as one 7800 XT for $800, and with 2 more chiplets you have the 7900 XT for $1000-1100. The only thing you need more of with the smaller cards is the I/O die. Of course the AIBs have to spend more because they need 2 or 3 GPU boards, but that would not be AMD's problem, and they could save money on the VRMs, cooler and memory with the smaller cards.
Disruptive ludens copy?
Thank you for a great video
Any more rumors about that ominous AM4 Zen 4 APU that’s suspected to be released due to the lack of DDR5 stock?
There is no lack of DDR5
Relatively speaking, there is a shortage of DDR5: most of the chips get stockpiled for server use as DDR5 registered ECC memory, and a "tiny" portion is allocated to consumers as regular unbuffered non-ECC memory. A consequence of this situation is that almost a year after Alder Lake's introduction, DDR5 unbuffered ECC is still nowhere to be seen. This is the type of memory you would need for current Intel socket 1700 W680 motherboards, or upcoming AM5 Ryzen motherboards, to be able to use ECC.
Do you think AMD will lose to Nvidia in Ray Tracing again? Since they're one generation behind on RT cores.
If they do, they'll be a lot closer, and hopefully the better overall performance will make up any gap. They're supposed to make a huge leap with RDNA 3 though: if a card is 2x better at raster, it should be more than 3x better at RT, where RTX 40 is supposed to be 2.5x better at RT for 2x raster. At least that's what I heard from various leakers. I bet any Nvidia-sponsored titles will find a way to gimp AMD though. DLSS 3 is supposed to have a better denoising algorithm, so they'll probably get a good boost there too.
Yes, AMD will lose in ray tracing, however the gap will not be massive like last generation. The main thing that will separate RDNA3 from Lovelace will be features at the professional level.
It will be close, but Nvidia will likely win again. AMD is looking at a 3x boost over the current gen, and as they were half the speed of Nvidia this gen, that means they will only be 50% faster than Nvidia's current gen.
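Spelling out the arithmetic behind that claim (treating "3x over current gen" and "RDNA2 at roughly half of Ampere in RT" as the leaked assumptions, not confirmed figures):

$$ \mathrm{RT}_{\mathrm{RDNA3}} \approx 3 \times \mathrm{RT}_{\mathrm{RDNA2}} \approx 3 \times (0.5 \times \mathrm{RT}_{\mathrm{Ampere}}) = 1.5 \times \mathrm{RT}_{\mathrm{Ampere}} $$

That is, about 50% above current-gen Nvidia, so whether Nvidia "wins again" comes down entirely to how big Lovelace's own RT jump turns out to be.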
They will, but at least it won't be as big of a gap as between RDNA2 and the RTX 3000 series.
I think $1299 for the 7900 XT is optimistic... like, it's double the performance of the 6900 XT and only 29% more expensive; that's a huuge uplift in value. But if that's the case, then the 7800 XT will be $699-799 and only 5-10% slower than the 7900 XT, so that'll be the sweet spot. I really do hope it turns out to be true, because I do think it's the best-case scenario we can hope for.
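For what it's worth, the value math in that comment checks out if you take the rumored numbers at face value ($1299 vs. the 6900 XT's $999 launch MSRP is the ~29% figure, and 2x is the rumored performance uplift):

$$ \frac{\text{perf/price}_{7900\,\mathrm{XT}}}{\text{perf/price}_{6900\,\mathrm{XT}}} \approx \frac{2.0}{1.29} \approx 1.55 $$

Roughly a 55% jump in performance per dollar in a single generation, which is exactly why it looks optimistic.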
Crypto is tanking, the economy is tanking. Even fewer people are going to buy GPUs at inflated prices. Yeah, they can price it at $3000 and end up discounting it to $1k or so. Remember, a product being 2x+ better than its predecessor doesn't translate into a directly proportionate increase in price.
@@nomad9098 I know, but usually the price/performance ratio improves by smaller and smaller amounts each generation. I'm not so up to date on crypto and the economy, but I thought the economy was recovering now that we can deal with COVID, and crypto... usually when it tanks it still retains a higher value than before it spiked, so is this an exception?
@@nomad9098 There is a lot of room to cut prices because gross margin is very high. It costs about $150 to produce that GPU, but that's not counting any design costs, which are significant.
Just answer me this: Will RDNA3 be sexy?
The way I see it, Lisa Su saw how AMD had so many product offerings with similar building blocks but completely separate and isolated teams, such that the GPU in an SoC before her time looked nothing like the discrete GPU's core functions, each with its own team. She then took after Henry Ford and said: nah, one GPU design, one CPU design, etc.; we chop them up into their own silicon dies and stitch the dies together to make radically different products, cut down on the number of teams, or make bigger teams to ideally make a better product. She brought the assembly-line mentality to the silicon industry, then did a mic drop and walked out of that meeting knowing she had the biggest dick in that room.
So take a GPU and stitch it together with boatloads of scalable cores and VRAM... you get a Radeon... hook it up to a CPU, you get an SoC... and so on.
No RDNA is not on N4. TSMC has been working to improve the HPC performance of ALL its advanced nodes.
The problem for TSMC and the companies who use them vs. Intel is clock speed performance. At some point in the near future Intel plans once again to be a main fab for other chipmakers and HPC is where they have an advantage IF they are on the same node as TSMC at any given moment. Intel 7 for instance scales very well for clock speed vs. power consumption all the way up to about 5.5GHz. TSMC N7 on the other hand struggles to hit that speed. I'm sure AMD and TSMC have had a few conversations on this topic and TSMC saw the writing on the wall.
I remember TSMC talking about this topic maybe about 18 months ago. In fact that's when I knew Zen4 would be able to clock higher than Zen3. How much? We're about to find out.
I don't think AMD would put its caching scheme on another die. They would want it as close to the cores as possible, probably right next to the raster cores.
Intel 7 is comparable to TSMC N3, not N7. Also what I said was that in addition to having it on the graphics dies there could be shared LLC on the I/O die, I didn't say it would be exclusive.
@@Coreteks we're all good except the first sentence. Intel 7 was in fact Intel's 10nm node and is STILL called 10nm by some people, although Intel now refers to it as Intel 7, and if one of their engineers called it 10nm they'd probably be talking to the big guy. Intel renamed their nodes about a year ago? I'm on a tablet and it's hard to go searching for data.
They renamed their nodes to be at parity with what TSMC and Samsung call their nodes, BASED ON transistor density. So no, Intel 7 is a renamed 10nm node that has a transistor density similar to, though not exactly like, TSMC N7. It's a little better than N7 for density.
Consequently, this is why Intel renamed what was GOING to be their 7nm node to Intel 4. It's based on the density of TSMC's naming scheme, since TSMC is Intel's main competitor for HPC fabrication.
If you feel this is inaccurate then please provide a link to a density comparison that shows differently, or some other justification for how you could possibly say Intel 7 compares to anything other than TSMC N7; although to be fair to Intel you'd now say it compares to N7P, maybe even N6P if that node exists.
As a side note (once again, I'm on a tablet and I really don't like using one), the older Infinity Fabric AMD was using seems to me like it would be too slow for this type of graphics compute. The way you've laid out your diagram for a possible chip layout seems like it would require a different connection scheme, almost similar to Intel's tile interconnect, which is parallel instead of serial. I HOPE this is the case, otherwise I see issues with how fast this can possibly be. Or better yet, different companies including AMD and Intel are working together to produce a common interconnect, and the way that has been shown puts one die next to another, which should be parallel data transfer, not serial. Any news on this?
RDNA 3 + Zen 4 gaming laptops being released this/next year are going to be insane.
For reference, the RX 6800M eats 145W of power and is within 10% of a 6700 XT. The 5900HX gets close to a 5800X. And this laptop gets 10 hours of battery life with a 1080p panel, and costs around $1400.
So just imagine a next-gen RX 7600M + Ryzen 5 7600H at $1000 packing 90-95% of 6900 XT performance at 1080p, with 6-7 hours of battery life from just a 60-70Wh battery. That would just kill Nvidia.
That is, if AMD isn't a moron this time, like pricing the 7600M higher than a 4060M while having similar performance.
Are there even enough RDNA2 + Zen 3 laptops out there at the moment?
AFAIK, the only "mainline" laptop that's going to release with that combo is the not-yet-released Lenovo Legion 7. The other one I know of is the Asus Zephyrus G14, but it turned out to be kind of a hot mess. Can we really be excited about next-gen AMD laptops? When would they even release, if they do?
@@frieren848 Have you not seen the ROG Strix AMD Advantage? The G14 is fine, it doesn't run hot.
Also, 12th gen and Ryzen 6000 cost too much in laptops right now. You can expect Zen 4 laptops around December.
@@siyzerix hmm, I didn't know there was an updated AMD ROG Strix 2022 model, I will have to check it out.
I know that Zen 4 is imminent this year, but aren't we talking about RDNA3 here? And by RDNA3 I mean the RX 7000s, or whatever the next-gen laptop cards are going to be called. I'm way more interested in and excited for the GPU jump than the CPU jump, because current CPUs are already powerful enough for laptops AND hot enough to be intrusive on battery life and thermals.
@@frieren848 No, there's no new RX 6800M ROG Strix laptop. I was referring to the 5900HX + 6800M Strix as an example of why all-AMD laptops are great. That thing has great battery life, as in ultrabook territory. So just imagine Ryzen 7000 + RX 7000 mobile series battery life, even on smaller batteries. All-AMD laptops should be able to undercut Nvidia + Intel.
If AMD doesn't bring genuine alternatives at all price points, then I've had enough of their incompetence. The desktop RX 6800 could easily have been put into laptops as an RX 6800M, and I suspect that was likely the plan. But Nvidia wasn't competitive enough, so AMD just put an RX 6700 XT into laptops and called it the RX 6800M, because that alone was enough to compete with Nvidia.
@@siyzerix The new Legion 7s are supposedly going to have a 6900HX + RX 6850M XT option, so I hope that one delivers! Should be better than the 6800M for sure.
I don't think we're going to see a big leap from that combo for a long time, because AMD will focus on their Zen 4 APUs first and foremost (so better CPU but weaker GPU, obviously). I'm not used to their roadmap, but their laptop RDNA3 discrete GPUs are probably a looong time away from now.
Die size is too big for 5nm, 60mm² to 75mm².
if amd gets viable for ML I might buy one
my guess is 1500 usd for 7900 XT
It's not millimeters squared, it's square millimeters ☝
Late September is here and I'm not yet able to buy it :) Hope they are available soon at a good price vs. Nvidia, since Nvidia's prices right now are a joke.
Another note: Nvidia is ALSO going to be using TSMC this time around. If they're on the same node, AMD gets no node advantage this time vs. last gen. They would have to get almost all their advantage from this chiplet design, or from the absence of other functionality vs. Nvidia, such as less powerful AI and RT cores, etc. In other words, I'm a little skeptical of that comment from the AMD person saying the competition will have to push power a "lot" higher to get the same performance. Geez, I'm pretty sure that's why Nvidia moved back to TSMC: higher clocks with less power consumption, which I don't think Samsung's node is good at.
@@superkoopatrooper4879 So you're one of the people that got caught up in all the rumor hype. At MOST Nvidia has a single halo product that consumes 550-600W.
The problem with rumors is they can become fixated on a SINGLE data point. Nvidia, for instance, designs a connector that can handle a certain power delivery and all of a sudden it's OMG, ALL OF NVIDIA'S GPUs REQUIRE A NEW PSU!!!
No, Nvidia will ALSO get more perf/watt, just like their last-gen products did. In fact, for what Ampere delivers in raster PLUS RT performance, their power consumption is not much off AMD's, considering AMD couldn't touch Nvidia in RT performance, like no contest. Nvidia got more of a boost in perf/watt than AMD moving to current-gen products. People somehow forget this, and for whatever reason think Nvidia wants you to have to buy a new PSU. BS.
Most Nvidia GPUs will not be much off AMD's next gen, or they'd lose sales.
Nvidia has a heck of a lot of extra transistors. Things like tensor cores eat up a lot of transistors, so you need to be a lot more aggressive with power gating.
By building smaller chiplets and emphasizing more efficient memory access, AMD does indeed have an efficiency edge. But it's like racing a Ferrari against a Viper: brute force versus a high revver.
Yes, and AMD is almost certainly looking to at least double their performance. Nvidia can do the same, but at extreme heat, power and cost.
@@donh8833 AMD doesn't compete in RT performance, so OF COURSE Nvidia has to use more transistors. Their image-enhancing technology is also better and requires more transistors. SO what? AMD is trying to catch up to Nvidia and will add more transistors to do exactly the same thing.
The point being skipped, usually because Nvidia guards information about its upcoming products a lot better than Intel or AMD, is that Nvidia gets the SAME perf/watt benefit AMD does from moving to a better node. In fact it could be MORE pronounced for Nvidia, because they are coming from Samsung 8nm, which isn't as good as TSMC N7, so if anything Nvidia has the bigger gain from the node switch.
So no, I don't agree, and once again too many people got hooked by the rumor of high power consumption for Nvidia vs. AMD.
Sure, Nvidia wants to make a halo product that beats AMD in every regard other than power consumption, but the rest of their product stack is going to get a better perf/watt uplift than AMD's similar products, while AMD is trying to catch up to Nvidia in RT performance and in image enhancement combined with a speed improvement (FSR). AMD has to spend more transistors to do that.
So sorry.
@@superkoopatrooper4879 In both my replies to you I said the same thing, which is that I could see a SINGLE halo product pushing power that high, but I don't see it being 600W. That's me personally, based on past rumors and what ends up being reality. I may have worded the two replies slightly differently, but the intent was the same.
To me it now seems like you're looking for a way to come at me instead of dealing with the topic, so before I get to the point of wanting to call you an ass or something else, we'll just stop here. Figure out how to converse without resorting to looking for some way to go after the other person, dude.
@@citadelbarker6106 I'm not going to get into it, but it's obvious you have way too much emotionally invested in Nvidia winning. Nvidia will still win on RT and maybe marginally on upscaled image quality (they will lose that battle the way they lost adaptive sync vs. G-Sync). But I think AMD is going to win perf/watt and 4K rasterization.
AM4 will still get a 5950X3D even with Zen 4 out.
Bro, I love your channel, but you really need to take punctuation breaks. Really. Pause after a comma and stop after a period. It is so incredibly hard to follow for someone with ADHD or other neurodiverse conditions. Hope you take the comment in the right spirit. cheers.
@Jayanth Kumar thanks for the feedback, I'll try and improve
Seriously. The constant word stream is tiring.
The analysis/prediction was entertaining and interesting as always, even if it won't turn out to be true. The only issue I had was with the statement at 20:02 that the 6950 XT is 2% better than the 6900 XT, which is complete BS! It's about 10% better, and it has a better perf/watt ratio than the 3090 Ti does; not to mention the perf/watt scaling from the 6900 XT to the 6950 XT is better in every way than from the 3090 to the 3090 Ti.
With the state of the gaming industry, and the world- why should I care about this technology anymore? because I don't.
The gaming industry is a joke, and so are our economies. Until that changes I don't see any point in paying attention.
Yep, I agree entirely with your sentiments. Why should I care when I can't afford these things any more?
@@DaystromDataConcepts Right- that just adds one more layer to why not to care.
I can afford all of this stuff easily, I just don't know why I'd buy it. I could go out and spend 10 grand on a PC right now, but why?
I built my 3950X system anticipating GTA 6, which didn't come. Starfield is delayed indefinitely. Those were the last two reasons I even own a gaming PC; I might as well sell my system at this point.
@NANOHORIZON I hear you. I'm hoping Bethesda will surprise us and Starfield is actually good. Skyrim in space at 4K60 would probably make me want to upgrade (I already have an OLED; otherwise, buy an OLED before upgrading your GPU!)
Bro are you a tech youtuber or ASMR youtuber?🤣
ahh here comes more core spaghetti
Reads N5P as if it's 5NP for no reason lol
Is it bad that I've started asking a part of my brain, which I now call Coreteks, when the next Coreteks video is coming out? =(
There's a lot to unpack here, almost all of it your incorrectness @Coreteks. I would like to help you:
The interposer or IO die will be "on a node" - those two constructs really can't be considered similar at all.
"nVidia only does GPUs" - demonstrably wrong! nVidia acquired Mellanox and this is the basis of their Bluefield/DPU product, not to mention their Tegra (heard of the Nintendo Switch?) and automotive lines.
"Infinity Fabric Controller" - Infinity Fabric is an evolution of HyperTransport, itself an evolution of the EV6 protocol used by the Alpha.
@7:32 - you're stating that an IO die and interposer are equivalent. This is completely incorrect - an interposer provides connectivity, not functionality.
@8:55 - you are speculating as to the reason for the temperatures seen on the 5800X3D - it's not universally applicable. Other leakers have indicated inverse cache stacking for future products, further detracting from your speculation.
@9:07 - "Memory transistors aren't scaling anymore" - again, demonstrably wrong given information published by Samsung, Intel and TSMC.
@9:53 - You cannot infer anything based on your attempt at analysis. You do not account for architectural improvements or other factors at all.
@10:22 - "Which maxes that 222 millimetres squared" - the unit is "square millimetres" - please at least get the units right.
@10:41 - Again with the "millimetres squared" - completely incorrect.
@12:01 - Again confusing interposers and IO chiplets.
Happy to help you out.
Jesus man, even if you do disagree with the man, there are ways of expressing it without being an absolute dick about it.
Until someone has the actual product, most of this remains speculation and educated guessing. At least he made a far more coherent video than most other leakers; let’s be appreciative of that.
Feel better for being an insufferable pedant?
Fighting Asian with Asian.
1:06 clink the links.
What good is RDNA3?
I am barely over RDNA2!
Wait... I'm still only using Vega 7 graphics.
So RDNA 3 in 2022 ?
In a few months
2/3
No matter what the design or issues or challenges are,
AMD wins and can do anything.
They have designed the best "glued-together CPUs" in life, as Intel called them,
kicking Intel's ass.
So... the GPU will be something great in the next round,
as RDNA2 GPUs have also beaten Nvidia in fps while using less power.
Can't wait for RDNA3.
So it seems they may have one memory chip instead of the usual many to make 8 or 10 or 12 or 16 or 24 gigs altogether.
This will kill latency issues... which is great.
4080 FE $999 MSRP? ;p
No one cares anymore, my man.
Companies have a chip shortage and they're still selling stuff well.
People have a money shortage and they can't afford stuff like in the old days.
N-veeeeeedeeea vs AMDeeeeeee!
😮
yay
Coreteks the peacock. ;)
I still want to see a proper ray tracing implementation from AMD. Until then I'm sticking with Nvidia.
8:40
2.5D stacking? For all the ASMR-sexy narration you do, that was a mind-boggling thing to say. Like, jeez Rick, read your script and think about what you're saying. 2.5D is a wonderful idea on a two-dimensional screen, but it ain't how this physical world works. Dare I say that was stupid?
Another video of Coreteks not knowing why there's a de-essing option in video/audio editing software that gets rid of ear-piercing sh, ch, th sounds in narration.