Is there a BIGGER RDNA3 GPU coming?

  • Published: 2 Oct 2024
  • Urcdkeys Black Friday
    25% off code: C25
    Win10 pro key($15):biitt.ly/pP7RN
    Win10 home key($14):biitt.ly/nOmyP
    Win11 pro key($21):biitt.ly/f3ojw
    office2021 pro key($60):biitt.ly/DToFr
    Affiliate links (I get a commission):
    BUY an LG C1 OLED 48'': amzn.to/3DGI33I
    BUY an LG C1 OLED 55'': amzn.to/3TTpQp9
    Support me on Patreon: / coreteks
    Buy a mug: teespring.com/...
    My channel on Odysee: odysee.com/@co...
    I now stream at:​​
    / coreteks_youtube
    Follow me on Twitter: / coreteks
    And Instagram: / hellocoreteks
    Footage from various sources, including official YouTube channels from AMD, Intel, NVidia, Samsung, etc., as well as other creators, is used for educational purposes in a transformative manner. If you'd like to be credited, please contact me
    #7950XTX #blackfriday #supersale

Comments • 339

  • @DiabloMonk
    @DiabloMonk 1 year ago +39

    Video starts at 2:15.

  • @AdamS-nd5hi
    @AdamS-nd5hi 1 year ago +140

    They didn't use a silicon interposer. Gamers Nexus did an interview with one of the engineers, and the interposer is a thin film with the wiring on it, not etched silicon.

    • @opinali
      @opinali 1 year ago +28

      Not to mention that even etched silicon interposers can be fabbed with areas exceeding the reticle limit, via a combination of multiple subfield exposures. This doesn't work for die logic, which requires much smaller features, but interposer wires are enormous compared to transistors or even memory cells (micrometers, not nanometers). There are papers going back to 2014 describing this. Interposer size is not a problem at all; AMD could make a chiplet GPU with a massive package size if they wanted. The limits are only power/temp and interposer bandwidth, and the Navi31 SKUs announced so far are nowhere near those limits.
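
The scale gap this comment describes can be sketched with rough, assumed numbers (ballpark figures, not vendor specs):

```python
# Ballpark comparison (assumed figures): interposer redistribution wiring
# is micrometer-scale, while logic features are nanometer-scale, which is
# why coarse multi-exposure stitching works for interposers but not for
# dense die logic.
RETICLE_FIELD_MM2 = 858      # ~26 mm x 33 mm single-exposure field
wire_pitch_nm = 2_000        # ~2 um interposer wire pitch (assumed)
logic_feature_nm = 5         # leading-edge logic feature scale (assumed)

scale_gap = wire_pitch_nm / logic_feature_nm
stitched_area = 2 * RETICLE_FIELD_MM2   # two stitched exposure fields

print(f"wires ~{scale_gap:.0f}x coarser than logic features")
print(f"2x-stitched interposer area: ~{stitched_area} mm^2")
```

With these assumed pitches the wiring is hundreds of times coarser than logic, so stitching exposures is viable for an interposer even though it never would be for a GCD.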

    • @Aparviel
      @Aparviel 1 year ago +37

      The more time goes by, the worse Coreteks's videos get... Spending 1/6 of a video on a claim that is untrue and easily checkable is first-tier analysis, I guess.

    • @cosmic_gate476
      @cosmic_gate476 1 year ago +6

      @@Aparviel infotainment is infotainment lol

    • @jamzqool
      @jamzqool 1 year ago +4

      @@cosmic_gate476 yeah, it's getting more obvious, sigh...

    • @johnfurey3593
      @johnfurey3593 1 year ago +1

      @@opinali microns

  • @big.atom37
    @big.atom37 1 year ago +32

    While you can't make a very big interposer, you can connect two of them via an organic substrate. The throughput will be quite massive due to the large area. You can also put a 3D-stacked chiplet on top of two GCDs to connect them. It will be much harder to package, but the benefits will be substantial.

  • @nathangamble125
    @nathangamble125 1 year ago +19

    TSMC can scale large multi-GCD GPUs which require high-bandwidth interconnects using their "LSI" technology, which uses multiple separate bridge chips rather than a single large interposer. It's similar to the EMIB technology which Intel uses in Sapphire Rapids and Ponte Vecchio. LSI might not be yielding high enough to be viable for consumer GPUs yet, but it's already being used in the M1 Ultra, so I'm very surprised that you didn't even mention it as a possible future solution.

  • @VeggieManUK
    @VeggieManUK 1 year ago +12

    Over at AdoredTV, Jim paints a very different picture from this one, and considering his track record with ZEN, I tend to agree. The 4090 will be the last win for Nvidia.

    • @NightRogue77
      @NightRogue77 1 year ago

      ESPECIALLY since he doesn’t seem to make any videos any longer, unless he’s got something really important to say

  • @Diglo1
    @Diglo1 1 year ago +31

    @Coreteks The reticle limit of the substrate is only an issue if AMD plans to only 2D-package their MCDs. Knowing that AMD has the ability to 3D-stack using through-silicon vias, I don't think AMD sees the reticle limit as a problem the way, let's say, Nvidia does.
    But I do agree that it's not likely for AMD to make two GCDs on a substrate; instead they would simply make a bigger GCD die.
    Also, through-silicon vias reduce the length and complexity of the wiring vs. their 2D packaging, which reduces power.
    Also, it has become obvious that cache is starting to occupy more and more space relative to the logic, which AMD is trying to solve with separate MCDs while saving cost.
    So it's not as simple as we might think, but this is a long-game issue.
    I think ignoring 3D-stacking the MCD dies is something we shouldn't be doing.

    • @jemborg
      @jemborg 1 year ago +1

      The point you and others are making, that this 3D aspect should be addressed, is 100% correct.

    • @mattgreenfield8038
      @mattgreenfield8038 1 year ago

      Just get a 4090. There is nothing close to it. I actually laugh out loud at the performance. I had a 3090, and I'm amazed at the performance increase this 4090 gives. Seeing all of these wannabe experts debate how AMD could "win" is really starting to look sad at this point. Just get a 4090 like the rest of us. What a product. If you game at 4K there is only one choice this generation. I'm surprised Nvidia isn't asking $2500 US for this level of performance. AMD lost serious ground this generation. Their $1000 6900XT competed quite well vs. the 3090. This time it's a bloodbath.

    • @Diglo1
      @Diglo1 1 year ago +1

      @@mattgreenfield8038 Just get a 4090? No wonder prices are a total shitshow.
      You are paying a hefty premium for the 4090, and for what, 15% more raster and 50% more ray tracing? And even with ray tracing (from extrapolated data) the 7900 XTX is still better value and totally playable. In raster the 7900 XTX is a steal. But I guess it's all just about ray tracing a better image, or not...
      The 4090 is $1600 MSRP, but cards are selling for $2000+, some near $3000.
      So I can't figure out how you can realistically say "just buy it"...
      But that's beside the point. We were talking about the possibilities and the technology.
      Yes, of course there will be those who want AMD to win, and it would be good for the market.
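
The value argument in this exchange can be put into rough numbers, using the MSRPs and the ~15% raster delta quoted in the thread (assumptions, not benchmark results):

```python
# Raster performance per dollar at MSRP, using the thread's own figures.
msrp = {"7900 XTX": 999, "RTX 4090": 1599}
raster = {"7900 XTX": 1.00, "RTX 4090": 1.15}   # relative raster (assumed)

value = {card: raster[card] / msrp[card] for card in msrp}
best = max(value, key=value.get)
print(best)   # card with the most raster per dollar under these assumptions
```

At street prices ($2000+ for the 4090, as claimed above) the gap would widen further in the 7900 XTX's favour.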

    • @mattgreenfield8038
      @mattgreenfield8038 1 year ago

      @@Diglo1 If you had a 4090 you would know what I'm talking about. The performance is outstanding. Not my fault prices are what they are. The AMD presentation for the 7900XTX was the most disappointing thing I've ever seen; Lisa couldn't get off that stage fast enough. AMD was so close last gen... now they are getting destroyed. It's a blowout for the 4090 across the board. It's not debatable. I had a 3090 and sold it to a friend for $300. The 4090 is so good, everything else this generation is a total loser. There's no reason to even get a new card, unless you're getting a 4090. To run 4K max settings (yes, RT is a setting) at native 4K with this level of performance is unbelievable. The 4090 really is this good. The 7900XTX should be called 7800XTX because AMD couldn't make a "90-class" competitor this gen. I'm sure RDNA 4 is going to be amazing, you just wait.

    • @Diglo1
      @Diglo1 1 year ago

      @@mattgreenfield8038 You are not making any sense. It's like saying "buy a Ferrari because it's amazing" and expecting everyone to have that capability.
      You are not grasping the fact that that GPU is way too expensive, and even more expensive than its MSRP.
      You don't care about the price, and that's the problem.
      You are also being super disingenuous.
      You obviously didn't see Nvidia's 4090 announcement, you know, the 2-4x better performance claims (more like 50-100%), because if you did you wouldn't be calling the 7900 XT/XTX launch bad. At least AMD didn't seem to LIE about their performance.
      I don't get why you say AMD is getting destroyed... The 7900 XTX is a 4080 competitor, and they will be trading blows in terms of RT and raster.
      Why are you focusing only on comparing it to the 4090?
      Now, will AMD release a 4090/Ti competitor? I don't know.
      I know the 4090 is fast, but it makes no sense in terms of cost to buy. The 7900 XTX, at its price tag, delivers near the same raster and more than enough RT performance. But hey, if RT means so much to you, I guess you didn't really have a choice.

  • @TheAzzzzzzzza
    @TheAzzzzzzzza 1 year ago +29

    I think Nvidia pre-emptively launched the 4090, as AMD had the 4080 covered. They usually just launch the xx80 and save a bigger product for later.
    The chiplet approach lets AMD reuse the memory/cache chips with different cores. They could make a 512-bit 32 GB card, or more likely, by using stacked cache chips, get away with the extra bandwidth needed to feed a larger core, maybe using faster 24 GB memory on a 384-bit bus.
    AMD will most likely announce/preview the 3D-stacked graphics when they actually launch the 3D Zen 4 CPU.
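
The bus options mentioned in this comment translate directly to peak bandwidth as bus width x data rate / 8. A quick sketch with assumed GDDR6 speeds (illustrative configurations, not announced products):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a GDDR bus."""
    return bus_width_bits * data_rate_gbps / 8

# Assumed configurations:
print(peak_bandwidth_gbs(384, 20))   # 384-bit bus, 20 Gbps parts
print(peak_bandwidth_gbs(384, 24))   # same bus, faster 24 Gbps parts
print(peak_bandwidth_gbs(512, 20))   # hypothetical 512-bit bus
```

This is why faster memory on the existing 384-bit bus gets most of the way to a hypothetical 512-bit configuration without a package redesign.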

    • @CrackaSlapYa
      @CrackaSlapYa 1 year ago +1

      Damn, that is a LOT of wishful thinking right there!

  • @michaelgottsman3767
    @michaelgottsman3767 1 year ago +56

    The 4090/80 and 7900 basically don't matter. The first card ending in 80 in the Steam hardware survey is 15th. The 60s are what get used. I remember the debacle that was the Vega 64, but that was followed by the RX 580 and 570, the best mid-range cards at the time (and STILL the most common AMD GPUs 3 gens later). Given how AMD's GPU market share has tanked, I'm hoping they will get aggressive down the stack like they were with the 580/70. I don't care if AMD beats the 4090, I just want them to embarrass the 4060. All this focus on the high end and who "won" just seems weird to me when we haven't even seen the cards people will actually use this generation.

    • @europason2293
      @europason2293 1 year ago +10

      The problem is the halo effect. Whoever holds top performance with their halo product sees consumer perception trickle down the product stack. The average consumer doesn't actually do that much research on how each and every card performs relative to one another; they see Nvidia is the absolute fastest at the top of the stack, and that sets their precedent for whatever tier of performance they wish to buy.

    • @michaelgottsman3767
      @michaelgottsman3767 1 year ago

      @@europason2293 AMD's reputation was bad before the Vega 64. It was even worse after, but the 580 still did great. Their reputation is better now, and they have a far, far better halo product than the Vega 64, so I don't see how something 20% better value than the 4060 would flop. The halo effect is a factor, but enough people do their research to eat at least 30 or 40% of the 4060 sales, otherwise the 580 would have flopped. For context, the most recent numbers have AMD at single-digit GPU market share.

    • @Drip7914
      @Drip7914 1 year ago +4

      @@michaelgottsman3767 The 6600/50 XT are both better than the 3060 in price AND performance, but even the 3090 easily outsold their entire lineup. AMD needs to innovate to win, but they're too busy trying to catch up. Look at all the selling points for these cards, like DLSS, RT, etc.; Nvidia is creating the benchmark for AMD to reach, and now AMD is even copying DLSS 3.0. They MUST create a completely unique selling point and improve their productivity performance.

    • @europason2293
      @europason2293 1 year ago

      @@michaelgottsman3767 I wasn't even really saying that their entire lineup is going to flop. I just don't see them making some huge gain in market share like everyone is hoping. And don't get me wrong, for everyone building PC's in the midrange, I pretty much recommend Radeon cards to everyone because they're undeniably better value, however their market share sucks in spite of this.

    • @chronosschiron
      @chronosschiron 1 year ago +1

      And I see what PC desktop makers are all doing: putting 3060s on AMD motherboards and CPUs.
      Go look, you'll see it.
      It's a way to keep both happy,
      and it might mean more collusion than we think, at least at that level.
      I wanted to see all-AMD systems, but WHOLLY SHIT, they are far more expensive than the sum of the parts, while
      you can get a 3090 in a system where the 3090 as a part costs only 200 less than the whole PC under it.
      That's bullshit.
      That means they could make all this and sell system parts much cheaper, and I dare anyone to do the actual research, because you will see exactly the sick bullshit I see too.
      It really flies with Nvidia cards.

  • @viktortheslickster5824
    @viktortheslickster5824 1 year ago +24

    I think if AMD were working on a Navi 30 die we would have heard about it in the leaks. I doubt adding additional cache to the MCDs in a 7950xtx refresh would improve performance much - that chip is compute limited and so we really need more shader engines to be added, not additional memory bandwidth...

    • @CarlosLauterbach
      @CarlosLauterbach 1 year ago +2

      Don't confuse this with memory bandwidth. Cache helps to overcome the memory bandwidth bottleneck, but only when reusing redundant data. Real memory bandwidth is extremely important to feed the GPU with fresh data from the VRAM.

    • @viktortheslickster5824
      @viktortheslickster5824 1 year ago +1

      @@CarlosLauterbach Yes, I agree. But cache is still part of the memory hierarchy, and AMD markets their Infinity Cache as a 'bandwidth amplifier'. I guess that means a lot of data can be stored in a cache pool and reused, rather than fetched again from VRAM, which will always have a higher latency penalty.
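
The 'bandwidth amplifier' idea can be sketched with a simple hit-rate model (a rough approximation with assumed numbers, not AMD's actual figures):

```python
def effective_bandwidth_gbs(vram_gbs: float, cache_hit_rate: float) -> float:
    """If a fraction of requests hit cache, only the misses consume VRAM
    bandwidth, so the same VRAM link effectively serves more requests."""
    return vram_gbs / (1 - cache_hit_rate)

# 960 GB/s of raw VRAM bandwidth with a 50% cache hit rate (assumed)
print(effective_bandwidth_gbs(960, 0.5))
```

Under this model a 50% hit rate doubles effective bandwidth, which is exactly the "amplifier" framing; it only helps for reused (redundant) data, as noted above.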

    • @CarlosLauterbach
      @CarlosLauterbach 1 year ago

      @@viktortheslickster5824 "I guess that means a lot of data can be stored in a cache pool and reused, rather than fetching again from vram which will always have a higher latency penalty." That's right. That's what I mean by "redundant data".

    • @denverbasshead
      @denverbasshead 1 year ago

      First AMD architecture that isn't memory bottlenecked lolol

  • @TheHighborn
    @TheHighborn 1 year ago +69

    I'd be VERY surprised if there's a double-logic-chiplet GPU coming. Listening to some interviews, it's VERY unlikely.

    • @JohnnyWednesday
      @JohnnyWednesday 1 year ago +3

      Unlikely using current methods, perhaps, but on-die lasers and chip-to-chip optical communication are a thing now. They easily surpass the bandwidth requirements; it's just not cheap enough yet, but that's how everything starts.

    • @georgwarhead2801
      @georgwarhead2801 1 year ago +5

      @@JohnnyWednesday It's not about the bandwidth, it's about how many traces they need to fit in there; this is the real problem at the moment, according to an AMD chief engineer.

    • @lazerusmfh
      @lazerusmfh 1 year ago +2

      Remember, everything for the next-gen GPU is already pretty much designed. This is how GPUs will get faster as the process-shrink wall draws closer.

    • @nedegt1877
      @nedegt1877 1 year ago +1

      AMD's MI300 has 4 GPU chiplets.

    • @greebj
      @greebj 1 year ago +2

      Enterprise workloads are a lot more distributable than consumer graphics. The death of SLI is a study in how latency from shifting frame data around causes insurmountable problems in consumer graphics.

  • @scarletspidernz
    @scarletspidernz 1 year ago +7

    It'd be cool if a 7970 3D GHz Edition paid homage to the HD 7970 GHz Edition as the top tier (everything pushed to the max, like the 4090 Ti will be) when the refreshes come out.

  • @paul1979uk2000
    @paul1979uk2000 1 year ago +7

    The price point of the 7900XTX is fine if it delivers the performance AMD is claiming; after all, let's remember that it's $600 cheaper than the 4090 and $200 cheaper than the 4080.
    Could AMD deliver at a cheaper price point? Yes, but history has shown that it has little impact on market share for AMD unless they win mindshare.
    The real worry I see, and where I think AMD could be making a big mistake, is the GPU tiers below the 7900XTX. The XT version's performance for the price is a bit expensive, and it suggests the lower tiers won't be much cheaper. This could be a massive mistake by AMD at a time when Nvidia is really messing up. If AMD were smart, they would go aggressive on pricing where the mainstream market is, to win market and mind share. Going on this pricing, I can't help but think it's not going to change anything in AMD's favour, even with Nvidia messing up, because the truth is, when you're the underdog, you have to be a lot more aggressive against your rival, either on performance or price. Either AMD doesn't get this or they are not even trying, but it's not going to win them market and mindshare until they change, and the real money is in winning mind and market share, as Nvidia and Intel are showing.

  • @nedegt1877
    @nedegt1877 1 year ago +16

    I think the issue is different. AMD forced Nvidia to release faster and faster models almost every generation. The 3090Ti wouldn't exist if AMD was "losing" in performance. And of course, Nvidia was smart to market ray tracing, because AMD's RDNA is pretty fast without ray tracing. The MI300 is a good example of what AMD is capable of. Maybe consumers won't see such a device anytime soon, but a cheaper MI300 for the highest end could be something. A Threadripper-like idea. 3D stacking is, by the way, not limited to cache only, as per AMD's own statement.

    • @RakeshMalikWhiteCrane
      @RakeshMalikWhiteCrane 1 year ago +2

      Actually, it's only been recently that AMD was able to compete. The Vegas were definitely not competitive in most workloads. RDNA1 was a conservative generation, where AMD held back a bit in order to get the product on the market, but was pretty honest about the fact that a lot had been dropped from it because it needed the next node to deliver what it really wanted to.
      RDNA2 however exceeded expectations. AMD did a really good job of both promoting it and sandbagging it, so I think nVidia got blindsided by that.
      We know that nVidia is working on chiplet designs, but pushed forward with a monolithic halo product for the Lovelace generation because of the RDNA3 looming on the horizon.
      AMD has been cleverly sharing engineering knowhow between the CPU and GPU design teams, which as AMD has mentioned helped achieve higher than anticipated clock speeds and efficiency, just like we're starting to see in Zen4 as the Zen4s start arriving.
      But the real dark horse is ARC; that probably doesn't worry AMD that much, but nVidia stands to take a HUGE bath because of it.
      Laptops comprise the lion's share of the personal computing market now.
      Right now most high end gaming and mobile workstation laptops use nVidia GPUs.
      Before RDNA2, ALL high end gaming laptops and most mobile workstation laptops used nVidia GPUs.
      That's a big deal, especially with Intel getting into the business at long last.
      Now that AMD and Intel BOTH have solid (in AMD's case, genuinely competitive) GPUs, they're both able to market high spec machines using technologies that only work in all systems with both Intel CPU and GPU or all AMD CPU and GPU...
      THAT is a huge deal. It's going to drive nVidia to push the halo products harder I expect, because it has no CPU to throw in the ring, unless it can revive Windows on ARM and build personal computing versions of Grace and Hopper... which seems unlikely to be feasible any time soon, because those are behemoths.

    • @nedegt1877
      @nedegt1877 1 year ago

      ​@@RakeshMalikWhiteCrane No, not only recently; it's really just Vega that failed to do what many hoped.
      As for the rest of what you're saying, I've been saying most of that for more than a year now. I know what you mean, and if you've seen some of my replies on some tech tubes, you'd know that I almost always say that Nvidia's days on the consumer PC market are numbered. I even dare to say that there will be no RTX 5000 cards for consumers.
      Nvidia basically has nothing to compete with on the largest portion of the market. That will cost them a lot of money. AMD has always been much smarter than both Intel and Nvidia. That is why they still exist and even managed to become the industry driver! AMD deserves a lot more credit than it gets.
      Nvidia bet on the wrong horse (ARM), which can't compete with x86; Nvidia + ARM will be a niche, not worth the effort. Nvidia failed to develop its own x86 platform, and now they're forced to look at 'AI factories', because in that market the CPU architecture doesn't matter.

  • @BartoszBielecki
    @BartoszBielecki 1 year ago +4

    Easy mode: binned 7900XTX
    Medium mode: slightly larger GCD
    Hard mode: 2 x GCD

    • @ChittyBang66
      @ChittyBang66 1 year ago +1

      Hardcore mode: 3D GCD and 3D MCD

  • @pete6300
    @pete6300 1 year ago +14

    AMD could have matched the 4090 in everything but ray tracing if they had also used a 4nm chiplet and 3D cache. The problem is AMD doesn't benefit from that. Nvidia owns so much market share that people would always buy their product over AMD's. It's similar to CPUs: AMD has matched or outperformed Intel for the last few years, and with that they have barely taken market share. Lowering margins to create a halo product that does the same thing as Nvidia's doesn't make fiscal sense. I think that's why they are trying to go after the mid-tier customer as a way to gain market share and get people used to AMD products.

    • @NightRogue77
      @NightRogue77 1 year ago +1

      Impressive….. almost everything you said was wrong

    • @WayStedYou
      @WayStedYou 1 year ago +1

      What? their servers have gone from

  • @vensroofcat6415
    @vensroofcat6415 1 year ago +3

    Guess what, Terminators won't be all-powerful either, just like humans. Computing has limits too. Who would have thought.
    AMD has made a technological bet, and that's about it. Don't make a religion out of it. You can't add chiplets forever either; it boils down to bottlenecks for all setups.
    It kind of feels like AMD is saying this "7/10" is reasonable and best for you. And it probably is. It's just that sometimes humans aren't all that rational.
    Anyway, hyped to see the RX 7900XTX's real performance and price. Because somehow I suspect it won't be even close to $999 here in the EU, and 99% AiB. Thank you, next.

  • @surferboyuk84
    @surferboyuk84 1 year ago +6

    AMD are going in the right direction; I've been picking energy efficiency for 10 years now.

    • @ImDembe
      @ImDembe 1 year ago

      It's too bad that they try to match Intel in terms of speed on the CPUs; none of the AM5 or Comet Lake releases are efficient out of the box. The good news is it doesn't take a lot of time or effort to change that.

  • @username65585
    @username65585 1 year ago +20

    According to the AMD engineer that Gamers Nexus interviewed, they could not do multiple compute chiplets due to bandwidth requirements.
    Edit: I got to the point in the video where he mentions this.

    • @Diglo1
      @Diglo1 1 year ago +10

      The GCD will stay monolithic, and they can double its size. There is no need to only 2D-stack, since AMD has the ability to 3D-stack. It is likely that AMD played it safe this time around; they will move on to 3D stacking, and the reticle limit of the substrate won't be an issue. 3D stacking also reduces power due to shorter and simpler wiring.

    • @JohnnyWednesday
      @JohnnyWednesday 1 year ago +1

      You listened to that entire interview and that was your only takeaway?

    • @sudeshryan8707
      @sudeshryan8707 1 year ago +3

      @@JohnnyWednesday exactly my thought 😅

    • @Diglo1
      @Diglo1 1 year ago +3

      @@JohnnyWednesday If you are talking to me, then yes. Coreteks really didn't say anything about 3D V-Cache other than mentioning the current design further stacked with more cache as 3D, and what he said made sense.
      However, I was talking about making a bigger GCD and moving the MCDs entirely into 3D space using through-silicon vias, just like they did with Ryzen 3D V-Cache, getting rid of the substrate limit, which would then be the same as Nvidia's, but with possibly more die area available as a 3D package, while still having the advantage of using multiple nodes offsetting the packaging costs and helping with binning. 3D also makes the traces/wiring shorter, using less power than 2D.
      Also, since cache is using more and more space relative to the logic, MCDs are a brilliant way forward.

    • @JohnnyWednesday
      @JohnnyWednesday 1 year ago +3

      @@Diglo1 - I wasn't talking to you, no - but that doesn't mean I'm not happy to :) I agree with you - seeing that they couldn't go full chiplet because of the interconnect density required, deciding to move what they could to chiplets? Genius - it may not be the whole cake, but the main die will now be smaller and their yields higher - good call!
      Intel's higher-density silicon-to-silicon tile interconnects might evolve into something that'll work for GPU chiplets - plus recent advances in on-chip laser communication might see future chiplets being connected with light - the bandwidth is super, super high and the chips are super, super small - exciting times!

  • @gamingtemplar9893
    @gamingtemplar9893 1 year ago +7

    For games there is no problem; the improvements come from the software side. UE 5.1 is only a step away from murdering "raytracing", so probably by UE 5.5 or 6 no one will know what that ancient Nvidia raytracing gimmick was. From next year on, games will already look like movies. For other tasks like AI, and whatever else GPUs might be needed for, that can be different, but I don't think it will change that much.

    • @Deliveredmean42
      @Deliveredmean42 1 year ago

      It's funny you say that, since it does use both path tracing and ray tracing in the engine. It just probably doesn't need those Tensor cores to make it happen, I reckon. As long as you have an accelerator of some sort, it can work. Some may even consider it on mobile devices; I am reminded of that one cool mobile raytracing/pathtracing demo all those years ago.

    • @JohnnyWednesday
      @JohnnyWednesday 1 year ago +3

      Exactly - RTX has been a thing for years now and support just hasn't caught on. There's no cheap RTX card, so the vast majority of people can't even test their code. Nvidia tried to use their position to force a standard, and the industry isn't interested - they did a Microsoft.

    • @JohnnyWednesday
      @JohnnyWednesday 1 year ago +1

      @@Deliveredmean42 - It's a complex subject, but the ray-trace alternatives to RTX are running in standard compute. BVH is the culprit - Nvidia chose it for the ease of hardware implementation, but it has now been supplanted by far faster spatial partitioning schemes.
      Now their API and silicon are locked into BVH, and general-purpose compute is nipping at its heels using a superior algorithm.
      Nvidia have properly dropped the ball on this one.

    • @Drip7914
      @Drip7914 1 year ago

      But the highest setting of Lumen still favours the RT cores so Nvidia will still have the advantage there lol

  • @adi6293
    @adi6293 1 year ago +6

    I think the 7900XTX is priced fine, I will have no issue picking one up

    • @NightRogue77
      @NightRogue77 1 year ago

      Anything with ridiculous 40%+ margins is not priced fine.

    • @adi6293
      @adi6293 1 year ago

      @@NightRogue77 Well it's fine for me

  • @dullahangaming5107
    @dullahangaming5107 1 year ago +4

    Why would AMD waste time making prototype GPUs like the 4090 that 0.01% of gamers will own, like Nvidia does? They're already the price-per-performance champs and are probably going to beat the 4080, the real Nvidia consumer flagship, at $300 less.

    • @gamingtemplar9893
      @gamingtemplar9893 1 year ago

      The market decides the price, not AMD or Nvidia. Prices are correct. The only way to lower prices is to have real competition and liberty, but this market is so dumb that it will ignore Intel or the Chinese cards because "China bad" or whatever.

    • @ghoulbuster1
      @ghoulbuster1 1 year ago +2

      True, but you've got to remember gamers are stupid, and if you offer "The best™" they WILL buy it.
      No matter the cost.

    • @TropicChristmas
      @TropicChristmas 1 year ago +1

      @@ghoulbuster1
      It's more than that, too. The 'halo effect' carries all the way down. If Nvidia consistently has the fastest GPU, ignorant consumers are going to assume that the lower SKUs are also better than their Radeon counterparts. People want 'premium' even at the 300-dollar level.
      Remember how many 3050s sold? Even though they were priced similar to a 6600 XT half the time, and even the 6600 beat it?

    • @TropicChristmas
      @TropicChristmas 1 year ago +1

      Remember, part of the reason that Ryzen won out was because they were offering 8 cores for 300 bucks, at a time when you had to spend 800-plus dollars to get that on HEDT from Intel. AMD's top offerings pooped on the desktop Intel chips in core count for like 3 generations and took the multi-core crown every time.
      Radeon has not priced that aggressively, nor offered an advantage like that. Not to mention, Nvidia never stagnated. Radeon isn't going to get market share without something drastic.

    • @sevdev9844
      @sevdev9844 1 year ago

      I agree that the main consumer market is more important, but some games really profit from a 4090, maybe especially VR. Look up Cyberpunk VR on a 4090 (Beardo Bungo or so).

  • @tilburg8683
    @tilburg8683 1 year ago +2

    I definitely have to say, if AMD's second-best GPU had stayed at $650, I would've definitely gotten that one. But now that they've raised it from $650 to $900, I'll pass.

  • @denvera1g1
    @denvera1g1 1 year ago +7

    There might be a 500+ mm² GCD out there, but it is likely only for enterprise and still a combination of Vega and RDNA architectures, though with the double-pumping of RDNA3, they may switch CDNA3 to be based fully on RDNA3.

  • @RageQuitSon
    @RageQuitSon 1 year ago +4

    Can't believe it's almost December. A 7950 XT/XTX is reserved for a miracle, I imagine. And I do agree the GPU prices are unfortunate, but that's the world we live in. Intel will price their cards similarly when they catch up, and they might actually go higher, since Intel is a bigger name than even Nvidia. (I mean in the far future, when people forget about Arc not being good.)

  • @johnnyp6202
    @johnnyp6202 1 year ago +3

    Isn't there a rumor that Navi 31 has a bug that is preventing 3 GHz+ frequencies and was fixed in Navi 33? If so, a refreshed 7950XTX might be significantly faster. MLID actually said at one point they were skipping RDNA4 and just doing a refresh before Navi 5, but now Navi 4 is back. Remember, AMD is always going to be an unreliable narrator. If they have found ways around the problems with multiple GCDs, then it could both explain RDNA4 coming back and the fact that AMD wants you to think it is impossible. I actually believe the bug rumor, because AMD missed expected targets so badly.

  • @ПётрБ-с2ц
    @ПётрБ-с2ц 1 year ago +2

    08:40 "OEMs would be limited"
    No, they would not. It's totally possible to populate only part of the memory channels with on-package memory.
    09:00 "adding things to an MCM needs a balance of performance, cost, power and profit margins"
    Are custom SKUs a joke to you? Intel is making quite a lot of specialised SKUs for its richest customers.
    05:50 "to create a GPU as powerful as the 4090 but in chiplets AMD would need a massive GPU package that would be much larger than the reticle limit"
    09:55 "interposer would exceed the reticle limit"
    Research TSMC's CoWoS, please. They announced 2x-reticle CoWoS in 2020 (1700 mm²).
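
The "2x reticle" figure quoted here is internally consistent; a quick check against the commonly cited reticle field size:

```python
# A ~1700 mm^2 "2x reticle" CoWoS interposer implies a single exposure
# field of ~850 mm^2, in line with the commonly cited ~858 mm^2
# (26 mm x 33 mm) lithography reticle limit.
cowos_2x_mm2 = 1700
implied_single_field = cowos_2x_mm2 / 2
print(implied_single_field)
```

So a package well beyond one reticle was already a shipping technology when this video was made, which is the commenter's point.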

    • @nathangamble125
      @nathangamble125 1 year ago +2

      I think it's ridiculous that Coreteks is completely ignoring CoWoS and LSI, which are designed to resolve the exact problems he brings up and which TSMC is already using to make the Apple M1 Ultra.

    • @ПётрБ-с2ц
      @ПётрБ-с2ц 1 year ago

      By the way, on consumer PCs, populating only part of the memory with high-speed memory would not be wasteful because of the impaired Windows kernel.
      But Linux rules in the enterprise market, and if AMD ever wanted to make memory non-uniform, they are free to improve the Linux kernel to support it better.

  • @N4CR
    @N4CR Год назад +1

    If they do a refresh it needs to be called the 7970, in homage to one of the greatest AMD GPUs in the last 2 decades. 40% OC out of the box on a reference cooler? UNHEARD OF since.

  • @CarlosLauterbach
    @CarlosLauterbach Год назад +3

    The 7900 XTX might be able to get very close to the 4090, since it might have a lot of overclocking headroom. Going from e.g. 2.2GHz to 2.64GHz might give 15% more performance, which might be enough to match the 4090. A 7950 XTX with an advanced node and increased memory bandwidth might be able to reach even 20% more performance. But we don't know yet; in the long run AMD might destroy Nvidia by price/performance domination

    • @CrackaSlapYa
      @CrackaSlapYa Год назад

      LOL. No, dude. My 4090, for instance, games at 3090MHz. Running a 7900 XTX up to 2.6GHz won't get within 20% of a 4090.

    • @CrackaSlapYa
      @CrackaSlapYa Год назад

      AMD has 8% of the discrete graphics market share. It's over.
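The overclock arithmetic in this thread (a 2.2 → 2.64 GHz bump giving roughly 15% more performance) implies sub-linear clock scaling. A minimal sketch of that estimate, where the 0.75 clock-to-performance factor is an assumption rather than a measured figure:

```python
# Back-of-envelope GPU overclock scaling estimate.
# ASSUMPTION: performance scales sub-linearly with core clock
# (factor ~0.75 here), since memory bandwidth stays fixed.
def perf_gain_from_clock(base_ghz, oc_ghz, scaling=0.75):
    clock_gain = oc_ghz / base_ghz - 1.0   # 2.2 -> 2.64 GHz is +20%
    return clock_gain * scaling            # ~= +15% performance

print(f"{perf_gain_from_clock(2.2, 2.64):.0%}")  # 15%
```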

  • @Greez1337
    @Greez1337 Год назад +1

    Honestly, who cares. We got a big recession coming. Tech is just not a priority. Especially when the games out are just cinematic movie garbage and craftathon survival games.

  • @DaveGamesVT
    @DaveGamesVT Год назад +1

    I don't need a 4090 killer, I need something at a decent price. Sadly AMD doesn't seem willing to deliver that.

  • @knghtbrd
    @knghtbrd Год назад +2

    Maybe there's a bigger RDNA3 coming real soon now? Not a chance. Chiplet CPUs were a bit of a slow burn, remember. It took time to get going before it was seriously in the race, let alone in the lead. Intel was able to catch up in the end, too. There absolutely will not be a competitor to the RTX 4090 in this first RDNA3 generation. Perhaps for the RDNA3+ we'll see AMD's top card besting the 5080 … but I kind of doubt it, and we're *at least* two generations before AMD could even reasonably take the performance crown, assuming AMD wants it. I don't know that they do. In fact, I think I have to agree that they almost certainly do not! If it means pushing power consumption higher than nvidia already has, I don't think AMD is interested. I don't think they should be interested. The world simply does not need a 600 watt GPU for gaming, let alone something higher than that. And that is before you get into all the physics problems that would exist in trying to do it with a chiplet model with today's technology.

  • @dslay04
    @dslay04 Год назад +1

    I think your analysis is way off. If they wanted to, they could combine two GCDs together just like on the MI250.

  • @gusmlie
    @gusmlie Год назад +3

    Would love an add-in card that was just a DXR accelerator; with PCIe bus speeds now, it'd be a worthy successor to the Voodoo 2.

  • @funkengruven7773
    @funkengruven7773 Год назад +1

    So where do you think AMD will go from here? Your video makes it sound like they've already reached the maximum size for a chiplet-based GPU, but you offered no opinion on how they might improve except for process improvements. I was hoping that we'd see even larger more powerful chiplet-based options in the future, but it sounds like that is impossible due to size, so what then? The hope was that multi-chiplet would give us a technological advancement, but it seems more aimed at cost savings/margin versus actually giving consumers something more powerful or advanced. AMD seems to embrace their role as "second best" with no desire to take a swing at the performance crown. They surely have no qualms with embracing Nvidia-like practices when it comes to pricing. This is yet another generation of "almost" and "what could have been" from AMD.

  • @thematrixredpill
    @thematrixredpill Год назад +1

    The oncoming technology is in you. Nanotechnology. Ask klaus schwab.

  • @19vangogh94
    @19vangogh94 Год назад +2

    Where did you get numbers for the hypothetical monolithic equivalent at 06:30?

  • @iller3
    @iller3 Год назад +1

    The "future" for AMD GPUs shouldn't even **be** enthusiast gaming. It should be amateur Blender artists and "small business" content creators

    • @nathangamble125
      @nathangamble125 Год назад +2

      Why? The Blender GPU market is much smaller than the gaming GPU market, and AMD is currently much further behind in that segment. It makes more sense for AMD to focus on their strengths and try to take market share, rather than prioritising workstation apps. Vega smashed Pascal in Blender, and it did basically nothing for AMD.

    • @iller3
      @iller3 Год назад

      @@nathangamble125 You were intentionally strawmanning what I said. I said the ENTHUSIAST Pc gamer market. ..which is fk'ing TINY also. You don't get to just grab the entire gaming market that relies on any GPU processing at all, and claim that is the "ENTHUSIAST" market share

  • @50shadesofbeige88
    @50shadesofbeige88 Год назад +1

    Please flatten out your EQ, or use a de-esser.

  • @danielthunder9876
    @danielthunder9876 Год назад +1

    I am glad that they didn't end up topping the 4090. If they did they would have increased the price. I can just about stretch to 1k for the XTX, but 1600 is way out of my price range.

  • @scarletspidernz
    @scarletspidernz Год назад +1

    5:36 lol the RTX 2000 series looks so baby now

  • @CyberJedi1
    @CyberJedi1 Год назад

    8K is stupid, period, especially on smaller screens like 27 to 32 inches. 4K is the optimal res and it already has excess pixel density for desktop users, and now with the 4090 we can mostly cap games at ultra settings, 4K 144Hz. What will probably happen is that, by the next gen of GPUs, we will hit a performance cap where an RTX 5090 will run everything you can throw at it at 4K. Developers would be able to make almost photorealistic games that run smoothly; the 4090 can do that already, but it still needs DLSS 3 frame generation in some games to achieve that high FPS at 4K.
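The pixel-density claim above can be sanity-checked with basic geometry; the panel sizes below are just the ones the comment mentions:

```python
import math

# Pixels per inch for a 16:9 panel of a given diagonal size.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

for name, (w, h) in {"4K": (3840, 2160), "8K": (7680, 4320)}.items():
    for diag in (27, 32):
        print(f'{name} at {diag}": {ppi(w, h, diag):.0f} PPI')
```

4K already lands around 140-165 PPI at these sizes, and 8K simply doubles that, well past what desktop viewing distances can resolve.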

  • @marcasswellbmd6922
    @marcasswellbmd6922 Год назад

    It's funny to me when people start talking about a 7950XTX and the 7900XTX hasn't even come out yet.. It's all What If's too.. Of course AMD is going to do a refresh but like 1.5 Years from now..

  • @kaseyboles30
    @kaseyboles30 Год назад +2

    Perhaps two gcd's on interposer and that on a substrate to connect to the mcd's. Not sure the extra complexity won't make it a non-starter, it probably will. Just an idea.

  • @SebastianMikulec
    @SebastianMikulec Год назад

    I think it's too early for 3D V-Cache in GPUs. AMD only just moved to a MCM design with RDNA3, it's a bad idea to introduce 2 new technologies that close together. Work out all the inevitable kinks with MCM on GPUs first, then add a 3D V-Cache version later on in RDNA4. It will make a lot more sense then too, since 8K TVs and monitors are still prohibitively expensive at the moment, but in 2 years or so 8K TVs and monitors should become at least a little bit more affordable. Plus, if RNDA4 has roughly the same perf/watt gain over RDNA3 that RDNA3 promises over RDNA2 then the 8900 XTX might actually be able to run games at 8K without a whole lot of black magic and smoke and mirrors. AMD had to massage the $#!+ out of the numbers to show the 7900 XTX "running at 8K" and 3D V-Cache, while it may help, isn't going to help performance enough to make the 7950 XTX a true 8K card.

  • @williamwoosleyiv6150
    @williamwoosleyiv6150 Год назад

    I honestly think AMD is out of the GPU race. If they only slightly beat a 4080 and get beaten by 30%+ in raster and trounced in RT by a 4090, then I hate to say AMD has no chance next gen. They are unwilling to build a big enough GPU. And it's sad: the 6900 XT had a 256-bit bus but was in the league of 384-bit bus cards. This gen they are on a 384-bit bus and competing with a 256-bit bus card. I don't see any way in hell they catch up. They should have aimed for a 128CU card, which would have been worthy of the 900-series logo, or just built a monolithic one. If the X3D CPU doesn't win when it's released, I think AMD is in trouble (at least on the gaming side) lol.

  • @billschauer2240
    @billschauer2240 Год назад

    There have been some comments that state that the bandwidth and/or number of wires to be routed on the substrate means that it is unlikely that a two GCD RDNA3 card will be released (specifically the rumored 2xN32). They base this on an interview given by AMD engineer Sam Naffziger and the answers that he gave to a question. However it is important to listen carefully to what was asked and what he actually said.
    He was asked if chiplets are so efficient why not continue the process and break down the GCD further into many smaller chiplets. Naffziger seems to have taken this to mean why did they not do what they did with Zen 3. He explained that there was more inter-communications required to take an approach like that with a GPU and you would give away much of the benefits of the chiplets if you did. However, since he was thinking about Zen 3 as an example it needs to be remembered that Zen 3 was broken up into as many as 12 chiplets (13 with the IO die). Such a fragmentation of a graphics chip would probably not work. However, the multiple GCD rumored for RDNA 3 is only 2 GCD's and not even 2 N31 die but 2 N32 die with 8 MCDs. This would require massively fewer inter-communications links.
    The one thing that frustrates me is that I have found nowhere where someone asks directly if RDNA3 is architected so that it can use multiple GCDs. If so how many? (I would bet only 2). see ruclips.net/video/8XBFpjM6EIY/видео.html

  • @martinmdl6879
    @martinmdl6879 Год назад

    UR CD Key sent me a key that I installed 6 months later and it did not work. They claim it is a lifetime license but refused to back their product. They blamed the fault on me, so I am screwed. Good luck. Anyone pushing this crap gets an "unsub". WE DON'T LIKE BEING RIPPED OFF!

  • @optimalsettings
    @optimalsettings Год назад

    There is just no 7900 physically available, but the crystal ball youtubers are already talking about the following generation :-)

  • @ulamss5
    @ulamss5 Год назад

    don't think nvidia would bifurcate the design into monolithic for the top die and chiplet for the rest, that's essentially 2 separate architectures of engineering r&d.
    more likely, nvidia would just keep bumping up the price of the top end, and AMD will continue to roughly match the fps/$ for the "mid-range" and increase margins. nvidia's "mid-range" will continue to have terrible value but still outsell AMD 10:1 as they always have.

  • @donh8833
    @donh8833 Год назад

    I was surprised when AMD initially said the chiplets were going to be based on memory controllers and not GCD compute units, the reason being the issue of memory coherency, which increases the inter-chip wiring significantly.
    But having independent MCs does allow more independent parallel workloads.
    The clock speed issue is what kills them, however. They had better pull a 7970 XT-type "you can overclock the heck out of it" miracle. 60% over a 6950 is cutting it darn close to a 4080. The 4080s are sitting on shelves and the 7900 XTX will have worse performance in RT. This is why the 6800 XT was trounced in sales by the 3080.

  • @AlexSeesing
    @AlexSeesing Год назад

    I absolutely think you're too negative about both. Sure, Nvidia will have issues dropping prices for their silicon since the dies are so humongous, and AMD has chosen a route that is cost-effective but far from competitive enough to be relevant in the real high end, again. It's more likely both companies are at a learning stage of how to proceed with this kind of nanotech. It's like the CPU wars to reach 1GHz first, but now they try to reach 10ms/frame regardless of the rendering task. It's about getting beyond 100 FPS at all available resolutions to prove their dominance.
    The victor of this generation is the one who provides the most stable frame times at high resolutions such as 4K and, if even possible, 8K.

  • @gstormcz
    @gstormcz Год назад

    I don't get many of your words, like "interposer", but the ideas and comments on the topic are still very interesting to me.
    I'd just like you to change your rhythm or tone, which hurts me a bit: your acoustic tempo affects my heart rate. Lol.
    Seems weird maybe, but that's it. When that happens I take a pause, but at least it lowers my attention to your really interesting technical talk.
    So you also answered my idea/question: AMD can't make a bigger MCM that would be economically worthwhile for gaming consumer production,
    nor one with more, smaller chiplets inside.
    At least not with current materials and technology, but the concept seems simply genius. Hope it develops in the near future. TSMC, ASML, monopoly, increases in wafer price for more advanced nodes... but the fact is these prices usually track the latest node and demand; as it becomes more common, pricing returns to normal.
    Just hoping gamers won't need to compete again with another crypto boom; it could get worse.

  • @jooch_exe
    @jooch_exe Год назад

    You know what you are really paying for? You're paying for a promise that product X will perform the tasks you run on your computer. In that sense, software is always more important. Back in the 90s it was clever software that was able to push the boundaries of technology. I feel that nowadays too much focus (or hope) is put on new hardware.
    I want to give credit to AMD for updating their older GPU's recently, and creating hardware platforms that live so much longer than they should.

  • @matthewIhorn
    @matthewIhorn Год назад

    The problem with using this naming convention as proof is that the 90-series, like the Radeon 7990 (an actual GPU), was for dual-GPU cards and would make sense for a dual-GCD card. Will it happen? Probably not, but as an argument for why it won't happen it is a poor one.

  • @dilteck
    @dilteck Год назад

    I had a PC built in 2019, and at that time I used an AMD Ryzen 7 2700 with an RX 580 graphics card. My graphics card died all of a sudden, so now my choice is another RX 580 or an RX 6500. I am not a gamer and not a content creator; I make a one- or two-minute video once in a while, maybe once in two months.

  • @denverbasshead
    @denverbasshead Год назад

    Chiplets are all about YIELDS. They don't give a shit about the 10% overhead; yield is all that matters. Those little 37mm2 MCDs have a 95%+ yield rate at least
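The yield point above can be illustrated with the standard Poisson die-yield model; the defect density used below is an assumed round number for a mature node, not a published TSMC figure:

```python
import math

# Poisson yield model: yield = exp(-D0 * A), with defect density D0 in
# defects/cm^2 and die area A converted from mm^2 to cm^2.
def die_yield(area_mm2, d0_per_cm2=0.1):
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

print(f"37 mm^2 MCD      : {die_yield(37):.1%}")   # ~96.4%
print(f"300 mm^2 GCD     : {die_yield(300):.1%}")  # ~74.1%
print(f"800 mm^2 monolith: {die_yield(800):.1%}")  # ~44.9%
```

The gap widens fast with area, which is exactly why splitting a large die into small chiplets pays off even with some packaging overhead.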

  • @anttimaki8188
    @anttimaki8188 Год назад

    Bleh, I'm waiting for a decently priced 7700 XT or 7600 XT, we'll see... I'm still on an RX 580; my kid has a 3060 and he's happy with it. It's mostly about the price/perf

  • @lucassuhadolnik3672
    @lucassuhadolnik3672 Год назад +2

    My favorite card I’ve ever owned was an HD 7950. I’ve been hoping for this naming nomenclature for years 😂

  • @billschauer2240
    @billschauer2240 Год назад

    Coreteks states that there can't be a 2 GCD die version of RDNA 3 because the interposer is made of silicon and it would exceed the reticle limit of the stepper. However in Gamers Nexus interview with AMD engineer Sam Naffziger he states explicitly that the substrate is organic not silicon, so it is not limited to 850 mm2. see ruclips.net/video/8XBFpjM6EIY/видео.html at about 13 minutes. Note Naffziger calls it a substrate not an interposer but he is still talking about the layer that transfers the signals between the GCD and MCD.

  • @celdur4635
    @celdur4635 Год назад

    I think focusing on efficiency is a good thing, BUT the energy problem will be solved in the energy generation industry, not in the microchip industry,
    since, no matter how efficient chips or any machinery are, we'll just acquire more of them and thus require more and more power.

  • @kaseyboles30
    @kaseyboles30 Год назад +6

    As it is, both announced cards look to beat the "4080", though the regular XT version by just a couple percent. And the XTX should be close in RT (est

    • @ca9inec0mic58
      @ca9inec0mic58 Год назад +3

      not to be passive aggressive but could you suggest a solution to that?

    • @JohnnyWednesday
      @JohnnyWednesday Год назад +6

      OpenCL is far more popular and they can't offer CUDA compatibility because Nvidia refuse to licence it to them.

    • @kaseyboles30
      @kaseyboles30 Год назад +6

      @@ca9inec0mic58 AMD has to spend the money and dev time to compete here. They could develop plugins for software that supports them, and spend their own money and time to help other software add good Radeon support. That's what Nvidia did. They also need to release quality dev kits and documentation, free online and to the open-source communities. The easier and more advantageous they make it to code for Radeon, the better they will do in the long run. I paid way too much for a 3060 Ti just to have the ability to use certain free/low-cost software. I would gladly have used a cheaper Radeon card with likely equivalent or better gaming perf (non-RT at least) if it had been an option. This is actually my first Nvidia GPU in quite some time; my previous GPUs were 5xx series and R9 series. No photogrammetry on those at any reasonable price, and I didn't look at unreasonably priced software. And the cheap/free 3D rendering software is mostly pathetic on AMD, not because of the hardware, but because the software has lackluster support, if any.

    • @kaseyboles30
      @kaseyboles30 Год назад +3

      @@JohnnyWednesday Where? I know for academics where every project is roll your own (almost) or other places where much of the software is homebrew or so heavily scripted to be much the same it's popular. I'm talking download and go apps. Things I don't have to spend days coding just to get a rough draft of what I want. Low cost creator apps and the like.

    • @ghoulbuster1
      @ghoulbuster1 Год назад

      Software support is a market share issue.

  • @denverbasshead
    @denverbasshead Год назад

    Are you still sticking with your 6% uplift for AIB cards? Hahahahaahahhaha. Your RDNA3 analysis has been terrible

  • @ShaneMcGrath.
    @ShaneMcGrath. Год назад

    I think it's going to be more like their chiplet cpu launch, First generation is ok but a bit meh, Second attempt opens everyone's eyes, Third one is like WOW.

  • @_A.d.G_
    @_A.d.G_ Год назад

    Am I wrong, or in the pro range there already is a dual-GCD card? Is that so much more expensive? I have my doubts.

  • @defectiveclone8450
    @defectiveclone8450 Год назад +1

    They could be making a larger chip, but the 7900 XTX itself was not ready yet. I think it's all they had, and they needed to bring something to the market, even if it's a week into December.

  • @MoraFermi
    @MoraFermi Год назад

    Interposers themselves don't have to be monolithic! Intel's EMIB is a thing. TSMC is likely to have an equivalent at this point.

  • @tuckerhiggins4336
    @tuckerhiggins4336 Год назад

    I am thinking RDNA4 is where they will win. It is all about managing expectations. Why would AMD outright win on the first attempt?

  • @elysiumcore
    @elysiumcore Год назад

    4K/60 will be the target for a long time. Nobody cares about 8K; the TVs are out, yet there's no content: videos, movies, games, etc.

  • @mm-yt8sf
    @mm-yt8sf Год назад

    the consonants are very strong. since i don't actively follow audio stuff, it made me wonder if that's what pop filters are for? it also felt like everything was done in one breath but at least it was strong the whole way through and not like some speech where it sounds like the speaker ran out of air and the sound trails off. rather it was more like a nonstop action movie. 🙂

  • @clarkisaac6372
    @clarkisaac6372 Год назад

    Yes, it's coming, and the RX 7990 XT is a card by team red meant to crush the RTX 4090.

  • @LakerTriangle
    @LakerTriangle Год назад

    Smart Access Memory with 7000 series CPU & GPU might be close to 4090 performance.

  • @angrygoldfish
    @angrygoldfish Год назад

    Please try to sort out your microphone popping and audio issues. I personally cannot listen any more to your content, no matter what speaker I use. I'm not the only one who finds it unbearable.

  • @Thor_Asgard_
    @Thor_Asgard_ Год назад

    Nvidia realized that by immensely lowering the generational progress at the low and mid-end, they can push more people into the more expensive products.

  • @JamesFox1
    @JamesFox1 Год назад

    7950 X3D / 7950xtx3D ??? i think that Would BLOW Nvidia Outta The water !!!!

  • @MetroidChild
    @MetroidChild Год назад

    The limit of the fan-out style interposer AMD uses (which TSMC developed with Broadcom) is actually twice the normal area, around 1700mm^2 (stitching two 26x33mm reticle fields into a single 52x33mm interposer, after looking it up). Adding two SMs and then giving all of them 50% more shaders (double the shaders overall) gets the die space to around ~500mm^2, a hell of a lot better than the 750-800mm^2 a monolithic design of equal proportions would need, and better yielding as well. It doesn't mean AMD is working on such a product, but the framework is laid out for them to do so, and interposers are infinitely cheaper than logic.

    • @MetroidChild
      @MetroidChild Год назад

      A fan-out interposer of that size could just barely support packaging the current EPYC offerings, but given how large the Genoa PCB is that won't stay true for very long, which probably means AMD will keep the current CPU strategy.
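The area budget described in this thread can be sketched numerically. The die areas below are approximate public figures, and the 20% spacing/routing overhead is an assumption, not a known packaging rule:

```python
# Does a hypothetical 2-GCD, 6-MCD package fit a ~1700 mm^2 (2x reticle)
# fan-out interposer? Die areas are approximate; the 20% spacing/routing
# overhead is an assumed figure.
GCD_MM2, MCD_MM2 = 300, 37
INTERPOSER_LIMIT_MM2 = 1700

def package_area(n_gcd, n_mcd, overhead=0.20):
    return (n_gcd * GCD_MM2 + n_mcd * MCD_MM2) * (1 + overhead)

area = package_area(2, 6)
print(f"{area:.0f} mm^2 -> fits: {area <= INTERPOSER_LIMIT_MM2}")
```

Even with generous overhead, such a package sits well under a 2x-reticle interposer, so the constraint would be power and inter-die bandwidth rather than area.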

  • @ThirAilith
    @ThirAilith Год назад +1

    I'm only really curious about the pricing of the incoming AMD GPUs (especially the 7700 XT or XTX, whatever the naming will be). Atm you can buy RX 6900 XTs on sale for only 699€ in Germany. While I know many will buy high-end GPUs no matter how much they cost, I would like to see AMD focusing on better price-to-performance GPUs that normal customers and gamers can actually buy for a good price, and not what Nvidia has been doing for a while now. If there's a good performance uplift for a reasonable price, combined with more than 8 GB VRAM, I'm willing to retire my 5700 XT next year.

    • @elysian3623
      @elysian3623 Год назад +1

      Unfortunately, AMD has played that game for years and all it got them was being laughed at and dubbed inferior. Even now, if the 7900 doesn't kill the 4090 they will be shunned as the 2nd choice nobody wanted. I personally find it a sad state of play, because the only way AMD seems to be getting the attention they deserve is that the 4080 is horrible value for money and most people can't see themselves dropping the best part of 2 grand just to CPU bottleneck. I'm excited to see what their new design and improvements bring, because compared to Nvidia they've not added anywhere near as many shader cores but have gotten a 2.57x TFLOP gain from 16% extra; it's going to be interesting at least.

  • @FreakyDudeEx
    @FreakyDudeEx Год назад

    well technically it is a multi gpu on 1 card.... and they did achieve at least an old dream of multi gpu working as 1 gpu.....

  • @UncannySense
    @UncannySense Год назад

    Surely if such a GPU is in the works, AMD should reinstate the 7990 XT moniker.

  • @BogdanTestsSoftware
    @BogdanTestsSoftware Год назад +1

    1) Will there be the ability to run TPU/Instinct compute tasks on mainstream GPUs, a CUDA alternative for students and developer wannabes? 2) Will 3D cache make it into GPUs / has it already?

    • @PineyJustice
      @PineyJustice Год назад +1

      There already is and has been a CUDA alternative, there has been for a very long time. Opencl / ROCm etc

    • @BogdanTestsSoftware
      @BogdanTestsSoftware Год назад

      @@PineyJustice Yes and will ROCm support any 7900 card? I don't need raytracing, I want to learn machine learning

  • @mrsasshole
    @mrsasshole Год назад +2

    Wish I had the time or inclination to address all of the strange suppositions and projections in this video. Not sure if the real corteks has been hijacked and replaced by a feeble minded copy, or if there's been some kind of head injury involved here. Regardless, the last couple of videos are head scratchers and seem to be wildly detached from reality.

  • @michahojwa8132
    @michahojwa8132 Год назад +1

    What about potential extra 10% perf from fixing drivers (top examples) - should we also forget about those? Nice +10% +5% OC (if possible at some point) could've jumped above 4090 stock raster perf and that would at least make good news headline.

    • @LaskaiTamas23
      @LaskaiTamas23 Год назад +1

      What are you talking about? Drivers are fine and the new cards are not even out lol how do you know drivers are gonna hold back the performance?

    • @michahojwa8132
      @michahojwa8132 Год назад

      @@LaskaiTamas23 Red gaming tech, MLID, not an apple fan - they need to make a lot of changes for mcm. Not only that, I think the perf can be a lot lower on some games, there might be big latency and there might be lower perf in 1080. General consensus is new drivers need over an year to mature.

  • @dcompart
    @dcompart Год назад

    URCDKeys is too disreputable for payment processors to accept. Had to go elsewhere.

  • @Dark88Dragon
    @Dark88Dragon Год назад

    That 7950XTX would be pure madness, bye bye 4090 you would have to say then

  • @MozartificeR
    @MozartificeR Год назад

    Also, with Hopper, Nvidia has tech down the pipeline that makes chiplets work faster.

  • @spinkey4842
    @spinkey4842 Год назад

    either way there's no denying just how far amd has come in the past 5-6 years in cpu and gpu performance

  • @sreif78
    @sreif78 Год назад

    Thank you for breaking some of this down. Good to have a better understanding of the line up.

  • @Zorro33313
    @Zorro33313 Год назад

    Wasn't TSMC developing a 1700mm2 interposer few years ago?

  • @JamesFox1
    @JamesFox1 Год назад

    @Coreteks, wrong: Intel is indeed using HBM, on package; you need to look again.
    { Intel Announces The World's First x86 CPU With HBM Memory }
    The Intel Xeon CPU Max Series.
    In fact, look at your own listing: "Where do we go from here"

  • @donh8833
    @donh8833 Год назад

    If AMD cannot beat the 4080 at $1000, they will be crucified.

  • @Kemano2023
    @Kemano2023 Год назад +1

    The possible reasons why the introduced cards are named 7900-something can be numerous. There are rumors claiming that Navi 31 missed its frequency target and needs a respin.
    Regardless of whether that's true or not, the cards could have been pushed down a tier while keeping the name, for AMD to get away with the price increase.
    The RX 6800 XT and 6800 were the 2nd and 3rd cards in the row (built on Navi 21). Either the respin rumour is true, or there is a 7950 XTX equipped with V-Cache and the 7900 XT and XTX are 2nd and 3rd rank cards at best, successors of the 6800 XT ($649) and 6800 ($579), which makes the pricing of the 7900 XT/X cards not very compelling (more Nvidia-like).
    The least I would expect a 7950 XTX to be is 2 GCDs (N32) + 4 MCDs + V-Cache, but I can imagine a 2x GCD (N31) + 6 MCD + V-Cache variant too. I'm a visionary; I won't be sad if that doesn't come true.

    • @nathangamble125
      @nathangamble125 Год назад +2

      "the 7900XT and XTX are 2nd and 3rd rank cards at best that are successors of 6800XT ($649) and 6800 ($579)"
      This makes no sense. The 7900 XTX is a full Navi 31 die, and is therefore the successor of the RX 6900 XT, which is a full Navi 21 die.

    • @Kemano2023
      @Kemano2023 Год назад

      @@nathangamble125 That depends on your point of view. How do you define "full" in the world of chiplets? We know that each MCD is capable of receiving V-Cache (even multiple layers high).
      The best AMD could build from Navi 21 (not counting the late refresh) was the 6900 XT. Its successor should also be the best AMD can build today. Currently the best AMD could build is Navi 31 with 6 MCDs with V-Cache on top of each; therefore Navi 31 with 6 MCDs without V-Cache is 2nd tier at best (successor of the 6800 XT).
      If you really believe that the successor of the 6900 XT built on a full Navi 21 is a 7900 XTX which is a "full" Navi 31, and that V-Cache is "extra", then you have fallen victim to their marketing strategy.
      My theory can only be proven wrong by time. If AMD releases Radeons with V-Cache only after about a year as a "refresh", then it's a different story.

  • @tim9605
    @tim9605 Год назад

    Imagine trying to sell face masks and then wanting people to think you are smart.

  • @MozartificeR
    @MozartificeR Год назад

    What are the percentage of apps that are cpu bound, and the percentage of apps that are gpu bound?

  • @kaisersolo76
    @kaisersolo76 Год назад

    Good video. But I think your cache performance is out

  • @JelliedInfant
    @JelliedInfant Год назад

    Pretty soon you'll need a separate case and power supply to run GPUs.

  • @nowherebrain
    @nowherebrain Год назад

    I have to finish the video but... I don't think two gcd's are the way to go, unless they are smaller and lower power..instead I see asics and other coprocessing dies being useful in this case, much like the mcds...maybe ai accelerator die of ray intersect die...much smaller closely packed dies...or two gcd's but different workloads...one for pure raster and shader and other for raytracing and optimizations and scaling tech......this would in turn require either a third die..coupling them to the output like the IO die for ryzen...I'm very curious to see what they do...adding more dies will increase power consumption drastically...

  • @CrackaSlapYa
    @CrackaSlapYa Год назад

    STILL won't beat my 4090. Why wait for that?

  • @Deznere
    @Deznere Год назад

    7900/50 XTX with 3D VCache would honestly match or beat the 4090, especially when paired with the upcoming 7000 series 3D CPUs.

  • @Dayanto
    @Dayanto Год назад

    I think you're reading too much into their "8K gaming" messaging. RDNA3 will be weaker than Lovelace full stop. Their 8K slides were not about performance but display bandwidth. If you have a bleeding edge high end display, you can't always make use of the 4090's performance since it uses outdated display standards. With RDNA3, you could at least theoretically do it (even if it requires turning down settings).
    This isn't some grand strategy, it's just another checkmark feature that might appeal to a certain minority of gamers. It's a nice bonus to consider if you're already into that stuff, but that's about it. It doesn't make a difference for anyone else.

  • @anslicht4487
    @anslicht4487 Год назад

    I hope nvidia gets creamed this time, the 10 series was the best ever

  • @bbbl67
    @bbbl67 Год назад

    Even if a 3D Vcache may not gain AMD much performance, can an entire compute core on a 3D interposer help out? Not much room left on the 2D interposer, but lots of room left on a 3D interposer?

    • @spankeyfish
      @spankeyfish Год назад +1

      Stacking logic chips will create serious heat rejection issues.

    • @bbbl67
      @bbbl67 Год назад

      @@spankeyfish Well, that's true, but SRAM is also pretty heat intensive, and they've been able to stack that in the form of 3D Infinity Cache.