AMD RDNA3 GPU Architecture Deep-Dive: 7900 XTX Drivers, Rasterization, & Ray Tracing

  • Published: 28 Jan 2025

Comments • 1.2K

  • @GamersNexus
    @GamersNexus  2 года назад +544

    As of a few days ago, there are a TON of new bot spam comments on YT. They copy/paste comments from others and have bot upvotes. Don't click their channels - all spam!
    Watch our earlier video about the physical card design! ruclips.net/video/8eSzDVavC-U/видео.html
    We'll do a tear-down as soon as we have one we can take apart!

    • @RanjakarPatel
      @RanjakarPatel 2 года назад +7

      sorry my dear. no be sadness. but you no have strong understand four gpu. amd make no good after koduri leaving. my uncle raja koduri making four amd but they forgetting four him. please make video four him celebrashin. u are try your best and four this i have proudness. but please cut hair so i no make confushin if i liking ur physeeq. xcelent four trying and four this i am gr8ful. EDITING: What is this bot? I am indian and very proudness. No be race against me Stefen! You bad man with two much hairs and very arrogance! What india do four you. we are number one four it and computer. no nice. no nice!

    • @RashakantBhattachana
      @RashakantBhattachana 2 года назад +6

      @@RanjakarPatel He was great.

    • @RashakantBhattachana
      @RashakantBhattachana 2 года назад +6

      Vishnu will bless him.

    • @CaptainShiny5000
      @CaptainShiny5000 2 года назад +3

      Isn't Crucial DDR5 kinda terrible atm compared to the competition? Buildzoid mentioned that in a couple of his videos. I'd prefer to see actual good products being promoted in sponsorships, not ones that are merely mediocre at best.

    • @goofball1_134
      @goofball1_134 2 года назад +3

      when will you guys be able to post benchmarks? a few days before launch?

  • @notabiggfan
    @notabiggfan 2 года назад +1828

    My heart dropped, thought the review embargo was lifted.

    • @RexZShadow
      @RexZShadow 2 года назад +182

      Ikr, can we just get our fucking review already.

    • @GamersNexus
      @GamersNexus  2 года назад +845

      @@RexZShadow We aren't in control of that timing

    • @professorchaos9171
      @professorchaos9171 2 года назад +41

      I would have cried actual tears.

    • @RexZShadow
      @RexZShadow 2 года назад +107

      @@GamersNexus about to hibernate until the embargo lifts XD

    • @757Bricksquad
      @757Bricksquad 2 года назад +28

      @@GamersNexus any idea when the embargo lifts?

  • @Jsteeezz
    @Jsteeezz 2 года назад +379

    I love your architecture deep dives. It feels like I was watching the Turing block diagram breakdown just recently; can't believe that was 2018. Time flies. Thanks for the high-quality content, and for maintaining your quality and integrity even with a growing subscriber count.

    • @GamersNexus
      @GamersNexus  2 года назад +76

      Wow! It's been a while since that one!

    • @Jsteeezz
      @Jsteeezz 2 года назад +13

      @@GamersNexus yup. Just checked and youtube says it was 4 years ago with 73k views. You will probably get that many views in a day or two for this video. Pretty crazy.😊

    • @dylanherron3963
      @dylanherron3963 2 года назад +6

      @@Jsteeezz 13k in 21 minutes lmao

    • @metalmaniac788
      @metalmaniac788 2 года назад +1

      33k at an hour

    • @nexxusty
      @nexxusty 2 года назад +2

      @@metalmaniac788 47k in that same hour.

  • @deek_online871
    @deek_online871 2 года назад +27

    “You come here for this kind of depth. The I/O die has I/O”
    Thank you Steve, this got me good.

  • @Condor_
    @Condor_ 2 года назад +39

    Steve, I have to be honest here in saying that about 70-80%~ of this information sort of just goes straight over my head. _However_ you still present this stuff fast enough and interesting enough that I enjoy listening to it all anyways because I know I'll learn at least *something* here, and because this tech is just super fascinating to keep up with. Your analogy with the coaster (which I have by the way, super good coaster) and the GPU (which I don't have by the way) was a good and simple one that I felt got the point across nicely!

    • @MrAxelStone
      @MrAxelStone Год назад

      Same here....i hear him talking about stuff I have no clue about and I feel like that gif of mathematical equations floating around as someone looks confused lol.

  • @SudoYETI
    @SudoYETI 2 года назад +36

    This is why I love GN. I watch a lot of other YT tech channels for the high-level overview, but I absolutely love deep dives like this from GN. Even if I don't fully understand everything, Steve and GN do a great job of explaining it all.

  • @alexmills1329
    @alexmills1329 2 года назад +519

    It's going to be really interesting to see the day we start 'gluing' GCDs together like Ryzen does CPU cores; hopefully it's not much more than one more generation off.

    • @GamersNexus
      @GamersNexus  2 года назад +183

      Seems like it'll happen eventually!

    • @tuckerhiggins4336
      @tuckerhiggins4336 2 года назад +11

      In GN's engineer video, AMD said that wasn't practical, or something like that.

    • @MrHamof
      @MrHamof 2 года назад +100

      @@tuckerhiggins4336 They said the interconnect isn't fast enough to let the GCDs cooperate properly, so it'd be a bit like SLI or Crossfire, where they end up getting in each other's way more than cooperating. But if they can solve that problem and get the interconnect speed up to where two GCDs can basically act like one GCD, it would allow them to use multi-GCD GPUs.

    • @tuckerhiggins4336
      @tuckerhiggins4336 2 года назад +12

      @@MrHamof I think I heard a number somewhere for the interconnect speed needed, something absurd like 7TB/s. Who knows.

    • @crazybeatrice4555
      @crazybeatrice4555 2 года назад +30

      @@tuckerhiggins4336 Isn't their current MCD-to-GCD bandwidth like 5TB/s, though? Doesn't seem like the far future.

  • @st3althyone
    @st3althyone 2 года назад +33

    Thanks for another loaded release, Steve. As always, you guys cover everything we need to stay informed in an easily accessible format. Thanks for always keeping it real. Thanks to all your team for continuing to bring us only the best.

  • @matasa7463
    @matasa7463 2 года назад +174

    I really wish EVGA would consider making some AMD cards. It would be so amazing to see their production lines preserved and GPUs of their quality remain in the market...

    • @infernaldaedra
      @infernaldaedra 2 года назад +6

      Ik, if they aren't making NVIDIA cards it would be awesome. I've always wanted an AMD EVGA card.

    • @golden1199
      @golden1199 2 года назад +16

      I don't think they would survive against the Sapphire and PowerColor market; it's probably why they don't.

    • @vedran5582
      @vedran5582 2 года назад +18

      Unfortunately, the AMD card market is smaller, so they would only get to keep a fraction of their production, but I'd love to see them join. On the other hand, I feel like AMD has more really good exclusive AIBs (like XFX, Sapphire and PowerColor), so they would face a fair bit of hard competition.

    • @arfianwismiga5912
      @arfianwismiga5912 2 года назад +8

      Maybe Intel should consider hiring EVGA as a new brand partner.

    • @thomasfischer9259
      @thomasfischer9259 2 года назад +7

      @@arfianwismiga5912 Intel is going to leave the GPU market in a year or so anyway.

  • @hugloobugloo
    @hugloobugloo 2 года назад +18

    A correction: culling is mostly for backface primitives, i.e. triangles facing away from the camera, since that is a very quick check and is easy to remove from the pipeline using fixed-function blocks. The z-occlusion/depth-buffering described in the video is much more complex and usually isn't classified as culling, since primitives can be partially visible (among other things) and thus can't simply be dropped from the pipeline.
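
For context on the backface check described above: in screen space it reduces to a sign test on the triangle's winding order. A minimal Python sketch, assuming counter-clockwise front faces (the function names and convention here are illustrative, not tied to any particular API or to AMD's hardware):

```python
# Backface culling as a winding-order sign test on screen-space vertices.
def signed_area(v0, v1, v2):
    """Twice the signed area of a 2D triangle given (x, y) tuples."""
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])

def is_backfacing(v0, v1, v2):
    """Non-positive area means clockwise winding on screen, i.e. the
    triangle faces away from the camera and can be culled."""
    return signed_area(v0, v1, v2) <= 0

print(is_backfacing((0, 0), (1, 0), (0, 1)))  # False -> keep (CCW)
print(is_backfacing((0, 0), (0, 1), (1, 0)))  # True  -> cull (CW)
```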

  • @toonnut1
    @toonnut1 2 года назад +42

    Nobody goes into as much detail as GN. Thanks bro!

    • @infernaldaedra
      @infernaldaedra 2 года назад +2

      Even back when they only did video game reviews, GN was lit.

  • @niks660097
    @niks660097 2 года назад +503

    3.5TB/s of bandwidth is no joke; it would mean 2nd-gen Infinity Cache can send 3500 bytes/ns (nanosecond). That would make traditional rasterization monstrously fast, rasterizing billions of triangles in microseconds.

    • @niks660097
      @niks660097 2 года назад +71

      @ラテちゃん They were doing that too, but RDNA 3 is much better at discarding triangles earlier in the pipeline. Anyway, mesh shaders and modern engines already have a good culling setup in software rather than relying on the hardware.

    • @JeremiahBostwick
      @JeremiahBostwick 2 года назад +53

      @@user-xl7ns4zt6z Culling has existed for a very long time. Even the N64 was capable of culling, I read an entire article talking about it in Nintendo Power back in 1996.
      It's just that applying it to ray-tracing is being more and more optimized. For rasterization, culling is more or less completely optimized/mature.

    • @JorgetePanete
      @JorgetePanete 2 года назад +8

      You usually need to compute only about 1.5 million triangles per frame at native 1080p if using Nanite or its equivalents; I don't know how many resources Lumen needs, though.

    • @photonboy999
      @photonboy999 2 года назад +2

      It's ALWAYS a balancing act.
      Even when you compare two cards where ONLY the video memory bandwidth is higher, the tradeoff is that it costs more. But in general you balance everything. And the SOFTWARE often has to make use of certain hardware changes. I remember people getting excited for AMD's culling years ago, but most games didn't make use of it, so we got a nice DEMO at the time and that was it.

    • @yellowflash511
      @yellowflash511 2 года назад +14

      It's 5.3TB per second, not 3.5, and yes to everything else.
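
On the unit conversion in this thread: a bandwidth quoted in TB/s is the same number of kilobytes per nanosecond, so the corrected 5.3 TB/s figure works out to 5300 bytes/ns (and the original 3.5 TB/s claim to 3500 bytes/ns). A quick sketch of the arithmetic, using only the figures quoted above:

```python
# Convert a bandwidth in TB/s to bytes per nanosecond:
# 1 TB = 1e12 bytes and 1 s = 1e9 ns, so 1 TB/s = 1000 bytes/ns.
def tb_per_s_to_bytes_per_ns(tb_per_s):
    return tb_per_s * 1e12 / 1e9

print(tb_per_s_to_bytes_per_ns(5.3))  # 5300.0 bytes/ns (corrected figure)
print(tb_per_s_to_bytes_per_ns(3.5))  # 3500.0 bytes/ns (figure in the original comment)
```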

  • @Walczyk
    @Walczyk 2 года назад +58

    Occlusion and culling are fascinating! John Carmack was an absolute genius in this respect; it was the single reason Doom was able to run decently on 486 CPUs. He used a binary tree method to find what needed to be drawn for every possible position and viewpoint. (A sketch of that traversal follows this thread.)

    • @jongeorge3358
      @jongeorge3358 2 года назад +9

      close! it was a tree, but not a binary tree, it used the mini max algorithm

    • @emperorSbraz
      @emperorSbraz 2 года назад +10

      Well the real game changer was the shotgun and of course later the super shotgun but ok
      :)

    • @o0Donuts0o
      @o0Donuts0o 2 года назад +8

      Wasn’t it called a BSP tree (Binary Space Partitioning)?

    • @infernaldaedra
      @infernaldaedra 2 года назад

      John Carmack is also a special kind of dirtbag for how he bailed on Bethesda/id Software/ZeniMax and ran to Facebook with stolen engineering documents, asking for money and to become the lead of Oculus.

    • @luca6819
      @luca6819 2 года назад +2

      It would run well in high detail and full screen even on 386 machines, if you had a fast enough VGA card!
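
On the Doom point above: the engine precomputed a binary space partitioning (BSP) tree over the level geometry, and walking that tree front-to-back from the player's position yields a correct drawing order without a z-buffer. A minimal Python sketch of that traversal, assuming a simple 2D node layout (the class and field names are illustrative, not Doom's actual data structures):

```python
# Minimal front-to-back BSP traversal. Internal nodes split 2D space with a
# line (point + normal); leaves hold drawable geometry. Names are illustrative.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class BspNode:
    point: Tuple[float, float] = (0.0, 0.0)    # a point on the splitting line
    normal: Tuple[float, float] = (1.0, 0.0)   # the line's normal
    front: Optional["BspNode"] = None
    back: Optional["BspNode"] = None
    leaf_polys: Optional[List[str]] = None     # only set on leaf nodes

def side_of(node: BspNode, eye: Tuple[float, float]) -> float:
    """Signed distance of the viewpoint from the node's splitting line."""
    return ((eye[0] - node.point[0]) * node.normal[0]
            + (eye[1] - node.point[1]) * node.normal[1])

def walk_front_to_back(node: Optional[BspNode], eye, out: List[str]) -> None:
    """Visit leaves nearest-first; Doom used this order so it could stop
    drawing once every screen column was already filled."""
    if node is None:
        return
    if node.leaf_polys is not None:
        out.extend(node.leaf_polys)
        return
    near, far = ((node.front, node.back) if side_of(node, eye) >= 0
                 else (node.back, node.front))
    walk_front_to_back(near, eye, out)
    walk_front_to_back(far, eye, out)

# Tiny example: one split at x = 0, a room on each side of it.
tree = BspNode(point=(0.0, 0.0), normal=(1.0, 0.0),
               front=BspNode(leaf_polys=["east room"]),
               back=BspNode(leaf_polys=["west room"]))
order: List[str] = []
walk_front_to_back(tree, eye=(3.0, 0.0), out=order)
print(order)  # ['east room', 'west room'] -- nearest room first
```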

  • @DKTD23
    @DKTD23 2 года назад +25

    Thanks for a great breakdown! My Modmat arrived around Thanksgiving. Very happy that I could support the chan finally!

  • @Sunlight91
    @Sunlight91 2 года назад +114

    What I take away is AMD will keep supporting and optimizing games for Infinity cache. So RX 6000 GPUs won't fall far behind in newer games.

    • @ChiquitaSpeaks
      @ChiquitaSpeaks 2 года назад +1

      Infinity cache seems really important in the future of SSD based games

    • @DragonOfTheMortalKombat
      @DragonOfTheMortalKombat 2 года назад +28

      If that happens, the RX 6000 series will age pretty well, unlike RTX 3000.

    • @egalanos
      @egalanos 2 года назад +4

      The cache is transparent to the application so not sure what you are thinking they would optimise for?

    • @DuBstep115
      @DuBstep115 2 года назад

      @@DragonOfTheMortalKombat Doubt it; games will drag behind because of consoles. We will have to wait for the PS6 to move forward in graphics.

    • @johnscaramis2515
      @johnscaramis2515 2 года назад +4

      @@egalanos You can, for example, optimize how, and how much, data is transferred from GPU memory to the GPU. Same as for CPUs.
      The problem is latency: if you read from a memory block it needs to be addressed, etc. (look at the timings you can adjust for DDR RAM), and afterwards it needs a few cycles before it is ready again. So you try to transfer as much data as possible in one go.
      If the chunks you transfer are too small, you lose performance even though you have a bigger local cache. If you transfer too much, the cache cannot hold all the data.
      I'm not sure about caches in GPUs, but in CPUs you can mark certain cache content to remain resident so it will not be evicted.
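
The chunk-size trade-off described above can be seen with a back-of-the-envelope model: every transfer pays a fixed latency cost, so effective bandwidth climbs with chunk size until the chunk no longer fits in the local cache. A rough Python sketch; the latency, peak bandwidth and cache size below are made-up illustrative numbers, not RDNA3 or DDR figures:

```python
# Toy model: effective transfer bandwidth vs. chunk size.
# All constants are illustrative placeholders, not real hardware numbers.
LATENCY_NS = 200.0        # fixed setup cost per transfer
PEAK_GBPS = 900.0         # peak streaming bandwidth (1 GB/s == 1 byte/ns)
CACHE_BYTES = 96 * 2**20  # local cache the chunk has to fit into

def effective_gbps(chunk_bytes):
    if chunk_bytes > CACHE_BYTES:
        return None  # chunk spills the cache; this simple model no longer applies
    transfer_ns = LATENCY_NS + chunk_bytes / PEAK_GBPS
    return chunk_bytes / transfer_ns  # bytes/ns, i.e. GB/s

for size in (4 * 2**10, 64 * 2**10, 1 * 2**20, 16 * 2**20):
    print(f"{size:>9} B chunk -> {effective_gbps(size):6.0f} GB/s effective")
```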

  • @fridaycaliforniaa236
    @fridaycaliforniaa236 2 года назад +12

    The amount of detail in your videos is just insane! Love this channel 🥰

  • @iamwham
    @iamwham 2 года назад +26

    I’m a software developer working on AI with interest in GPU architecture. Also an avid gamer. I love this content, keep it coming!

    • @aravindpallippara1577
      @aravindpallippara1577 2 года назад

      What is the stack you are usually working with?
      Pytorch compatibility layer to tensor or direct CUDA or other stacks

    • @infernaldaedra
      @infernaldaedra 2 года назад

      @@aravindpallippara1577 Probably CUDA, because NVIDIA pays so many software companies to develop for their own compute.

    • @iamwham
      @iamwham 2 года назад +2

      PyTorch and TensorFlow. We are moving to the ONNX open framework for inferencing. We had to do a lot of running on CPU cores because of the GPU shortage, so no big Nvidia-specific optimizations, but it's relevant to mention that AMD isn't in the AI game at all. I work more on the distributed systems architecture in our group.

    • @aravindpallippara1577
      @aravindpallippara1577 2 года назад

      @@iamwham AMD does have an ONNX endpoint for their GPUs; it's not very performant, but it does work.
      I always wanted to work with distributed systems. Oh well.

    • @namelesslad5803
      @namelesslad5803 2 года назад

      @@infernaldaedra They sent their engineers to help. You say it like Nvidia paid for exclusivity.
      AMD could've done the same but pre-Lisa Su they didn't have someone with a good vision

  • @JBColourisation
    @JBColourisation 2 года назад +8

    I'm really excited to see what Smart Access Video is actually capable of!

  • @ProjectPhysX
    @ProjectPhysX 2 года назад +43

    What's missing from the video is the new VLIW2 ALUs, with 4 FP32 FLOPs/cycle instead of 2 like literally all other GPUs from all vendors. I'm really curious how this will perform in OpenCL compute. (A worked throughput estimate follows this thread.)

    • @hishnash
      @hishnash 2 года назад +8

      It will depend a lot on your workload. Most of the time GPUs end up being bottlenecked on things other than raw ALU compute, such as not having enough registers to run everything at once or being stalled on memory.

    • @infernaldaedra
      @infernaldaedra 2 года назад +3

      Definitely going to be keeping an eye on benchmarks

    • @organichand-pickedfree-ran1463
      @organichand-pickedfree-ran1463 2 года назад +3

      Isn’t that just matching nvidia for fp32? Or what am I missing here?

    • @ProjectPhysX
      @ProjectPhysX 2 года назад

      @@organichand-pickedfree-ran1463 yes, both doubled FP32 throughput
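
To put a number on the dual-issue point in this thread: multiplying the 7900 XTX's published shader count by an assumed ~2.5 GHz boost clock, two FLOPs per FMA, and the extra co-issued FP32 op lands close to AMD's advertised ~61 TFLOP/s figure. A quick sketch (the clock here is an assumption for illustration, not a spec sheet value):

```python
# Back-of-the-envelope peak FP32 throughput for a dual-issue (VLIW2) design.
shaders = 6144          # stream processors on Navi 31 (7900 XTX)
boost_clock_hz = 2.5e9  # assumed ~2.5 GHz, for illustration only
flops_per_fma = 2       # a fused multiply-add counts as two FLOPs
dual_issue = 2          # RDNA3 can co-issue a second FP32 op per lane

tflops = shaders * boost_clock_hz * flops_per_fma * dual_issue / 1e12
print(f"~{tflops:.0f} TFLOP/s peak FP32")  # ~61 TFLOP/s
```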

  • @heni63
    @heni63 2 года назад

    So cool that you mark the area you are talking about (I usually listen to your videos as a kind of podcast, so I couldn't notice before, sorry).

  • @opinali
    @opinali 2 года назад +11

    That was a great summary/recap of all the info released so far, thanks! Just hoping that AMD will release an updated RDNA architecture whitepaper. As an engineer, I confess that I have developed a significant bias recently for AMD's CPUs and GPUs, because I consider them innovative, elegant designs with well-balanced priorities and great execution. Competitors have their own strong points, from Intel with superior multithreaded scaling to Nvidia with a still-undisputed lead in RT & ML, but honestly I think AMD's offerings are a much better fit for the vast majority of users' real-world needs.

  • @Maed54
    @Maed54 2 года назад

    Thanks for this great video, as usual. It's always a pleasure to stop by here ;)
    (Too lazy to try commenting in English)

  • @tuhaggis
    @tuhaggis 2 года назад +32

    If the USB-C implementation is the same as RDNA2, you can plug storage and other devices into it too.

  • @MrJaysniping
    @MrJaysniping 2 года назад

    Bro, never stop what you do; this is so much useful info for the average gamer/PC enthusiast. I have been watching since I built my first PC in 2010, and not only has the info from this channel helped me pick the best parts for the price to this very day, it also showed me how to overclock and use them effectively to get the best out of them, while averaging nice, low, consistent temps and insane clocks, as well as keeping all my components nice and dependable. So ty man!

  • @MrEdispilf
    @MrEdispilf 2 года назад +49

    Type-C on the card is great for artists and anyone who uses a graphics tablet! They often use HDMI if there is no Thunderbolt or alt-display type C available, so for anyone who draws, it basically frees up the HDMI, and/or means you can skip a thunderbolt motherboard. Needing/wanting a thunderbolt compatible mobo really limits options.
    Specifically for me, my "good monitor" doesn't support advanced color spaces on displayport, only HDMI, so a type-c on the graphics card lets me plug in my drawing tablet, and use the HDMI for reference/preview. Or watching movies and stuff lol. I've been looking at the thunderbolt compatible motherboards, and some of the USB 4 options, but they're often much more expensive, rarely available in my preferred matx, and, I quite like my current motherboard.

    • @el-danihasbiarta1200
      @el-danihasbiarta1200 2 года назад +6

      We need more USB-C ports on GPUs. I just want one simple cable from the PC to the monitor, and to use the monitor as a USB hub.

    • @hishnash
      @hishnash 2 года назад +3

      Yeah, many high-end monitors (for artists) end up benefiting from USB-C with DisplayPort tunnelling. One reason for this is that these monitors also want to be usable by Mac owners, and those users expect single-cable connections with TB/USB-C.

    • @infernaldaedra
      @infernaldaedra 2 года назад

      You kind of just convinced me to get a drawing tablet just now lol

    • @infernaldaedra
      @infernaldaedra 2 года назад

      @@el-danihasbiarta1200 I already have an idea for how to do that. So many GPUs are already 2-4 slots, so a 3-slot card could easily fit more Type-C connections, but it would make GPUs even larger, because each Type-C connection has to provide a wattage output unless it's functioning as display-only.

    • @carlkidd752
      @carlkidd752 2 года назад

      I went with the aorus master x570 as it has a thunderbolt 3 header.

  • @waldojim42
    @waldojim42 2 года назад +6

    It would be nice to see not just performance reviews when these launch, but quality as well. Given that FSR, audio noise suppression, real-time encoding, etc. are all selling points, having those compared to the Nvidia DLSS, Broadcast, etc. features could be interesting. Probably not in the same video, to be sure, but something of an in-depth look at how effective the tech is would be great.

  • @haywagonbmwe46touring54
    @haywagonbmwe46touring54 2 года назад +34

    Content!
    Kinda stoked about how Navi 40 and beyond will shake out... chiplet GPUs are neat.

    • @infernaldaedra
      @infernaldaedra 2 года назад +2

      I imagine that if they can get the architecture design to continue to scale and improve density, it will eventually take over servers at high densities. This kind of technology can potentially push the whole industry forward, where we might see products like the Nvidia DGX become obsolete and instead see racks, blades, or systems that have huge bays of these kinds of smaller but intricately integrated GPU designs.

    • @haywagonbmwe46touring54
      @haywagonbmwe46touring54 2 года назад

      @@infernaldaedra We can't shrink processes on silicon forever. This may be the workaround until a new paradigm in IC technology is rolled out. Bring on the photons!

  • @marstedt
    @marstedt 2 года назад +2

    Thanks Steve. ... and thanks to the entire GN crew.

  • @Psychx_
    @Psychx_ 2 года назад +6

    I just got a good deal on a used 6800XT incl. waterblock. Couldn't be happier, although I must admit that the architectural advances of RDNA3 sound quite juicy :P

  • @WIImotionmasher
    @WIImotionmasher 2 года назад +3

    I'd like to add:
    I am glad they are advertising the 8-pin. I know it seems silly, but the 12-pin represents a trend in modern PCs that I don't like: power draw getting so big that they need to make new connectors. Seeing the normal connectors tells me their GPUs won't draw ridiculous power, and I much prefer that, both for space/thermal performance and just flat-out energy usage.
    I do care about power usage on my future PCs now.

    • @EarthIsFlat456
      @EarthIsFlat456 2 года назад +1

      The RTX 40 series draws significantly less power than the 30 series in actual gaming, but most people are still misinformed.

    • @sammiller6631
      @sammiller6631 2 года назад

      @@EarthIsFlat456 The RTX40 series still draws significant power. That's why they catch fire.

    • @Safetytrousers
      @Safetytrousers 2 года назад

      @@sammiller6631 No 4090 has caught fire, and the reason for any melting is an improper connection.

  • @domm6812
    @domm6812 2 года назад +4

    This is a fantastic deep dive. Very interesting, and very well explained to a person who isn't an engineer.

  • @ocudagledam
    @ocudagledam 2 года назад

    A note on the out-of-order execution explanation given at 9:00: what Steve describes is actually pipelining (starting the next instruction before the previous one has finished). Out-of-order execution, on the other hand, is exactly what it says: executing instructions in an order that's different from the one given in the code. A modern processor has a multitude of execution units for different kinds of instructions (or sometimes more than one unit of the same type). If an instruction will use an execution unit that's currently free, and it does not depend on the results of any code ahead of it that still hasn't been executed, you can start that instruction ahead of its place in the code. This increases the overall utilization of the different execution units inside the processor and thus raises the number of instructions executed per clock.
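
A toy illustration of the distinction drawn above: an in-order core stalls behind the dependent add while the load completes, whereas an out-of-order core can issue the independent multiply into an idle unit during that stall. A minimal Python sketch of that scheduling idea (the instruction format and latencies are invented for illustration, not any real ISA):

```python
# Toy single-issue scheduler: each cycle, pick an instruction whose inputs are
# ready. In-order mode must pick the oldest; out-of-order may skip past a stall.
instrs = [
    ("LOAD", "r1", []),      # long-latency load
    ("ADD",  "r2", ["r1"]),  # depends on the load, so it must wait
    ("MUL",  "r3", []),      # independent, can run during the stall
]
LATENCY = {"LOAD": 4, "ADD": 1, "MUL": 1}

def issue_schedule(in_order):
    ready_at = {}            # register -> cycle its value becomes available
    pending = list(instrs)
    cycle, schedule = 0, []
    while pending:
        for i, (op, dst, srcs) in enumerate(pending):
            if all(ready_at.get(s, 0) <= cycle for s in srcs):
                ready_at[dst] = cycle + LATENCY[op]
                schedule.append((cycle, op))
                pending.pop(i)
                break
            if in_order:
                break        # the oldest instruction stalls everything behind it
        cycle += 1
    return schedule

print("in-order:    ", issue_schedule(True))   # MUL waits behind the stalled ADD
print("out-of-order:", issue_schedule(False))  # MUL issues while ADD waits on r1
```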

  • @AspenTitan
    @AspenTitan 2 года назад +24

    Always love these; they really teach a lot about the hardware and its little intricacies. Thanks Steve and the team at Gamers Nexus!

  • @WolfsFriend42
    @WolfsFriend42 2 года назад +6

    I totally love how you explain things and how deep you routinely dive into whichever topic you are covering. Us true Nerds thank you. Keep up the great work. 🐺💜👼

  • @peterjansen4826
    @peterjansen4826 2 года назад +5

    Understanding your lack of knowledge on a particular subject (in this case CPU/GPU architecture) is a sign of having gained enough knowledge to understand your shortcomings. A good thing. It happens with any subject or skill at some point as you progress. I studied some computer architecture as part of my education, but not enough to understand all the details.

    • @infernaldaedra
      @infernaldaedra 2 года назад +1

      Definitely a good trait, especially understanding that processors are designed by teams of hundreds of people, not just one person. There are so many technologies and complexities in such a small thing that there isn't even close to enough time in a day to talk about everything that goes into a modern x86 CPU or graphics card.

    • @sammiller6631
      @sammiller6631 2 года назад

      @@1st_DiamondHog It's called illusory superiority. Dunning-Kruger is a narrow subset.

  • @JayTsPhoto
    @JayTsPhoto 2 года назад +1

    Smart Access Video is actually really exciting and interesting. A lot of cool technologies here.

  • @0s0sXD
    @0s0sXD 2 года назад +7

    I'm seriously, incredibly grateful for this. I absolutely love it. Thank you Gamers Nexus.

  • @t0mn8r35
    @t0mn8r35 2 года назад +1

    Very interesting and informative as always. Insert the "Thank you Steve" meme here.

  • @ruisilva4317
    @ruisilva4317 2 года назад +32

    I switched to AMD because they seem to align with the way I see things: instead of throwing a big die at the problem and being wasteful, they're more concerned about proper resource usage and optimization. The fun part is that despite Nvidia throwing all the money at it, AMD still manages to somewhat compete, and even win, especially at normal prices.

    • @helloguy8934
      @helloguy8934 2 года назад +1

      Except that AMD loses pretty badly in RT.

    • @Mark-kr5go
      @Mark-kr5go 2 года назад +11

      @@helloguy8934 I don't really get the obsession with rt when most people are still using old-ass gpus that can't hit 60fps ultra on new games.

    • @TheDravic
      @TheDravic 2 года назад +2

      @@Mark-kr5go These are not old GPUs, these are new GPUs. Why are you bringing old GPUs into the conversation about how new GPUs from AMD are slower at ray tracing than Nvidia's GPUs?
      Obviously if you want to keep playing Team Fortress 2 you don't really need to worry about buying anything newer than 2012.

    • @the0000alex0000
      @the0000alex0000 2 года назад +2

      @@Mark-kr5go maybe consider not being salty when people enjoy good performance on RT like every AMD fanboy out there.

    • @hman6159
      @hman6159 2 года назад

      @@the0000alex0000 well considering the 7900 xtx barely loses to the 4080 in raytracing tasks (between 3 percent and 15 percent) it should be able to do raytracing perfectly fine

  • @michaelrothe7804
    @michaelrothe7804 2 года назад

    I love these videos that you do Steve and GN crew!

  • @KC-nd7nt
    @KC-nd7nt 2 года назад +3

    Video released 7 minutes ago
    Content is 37 minutes long
    People liking it before they watch
    100% likes
    You're on top of your game, Steve.
    Well done to all the staff @ Gamers Nexus

    • @MrClewis97
      @MrClewis97 2 года назад

      They just know it will be the best content available on the subject

  • @alistairblaire6001
    @alistairblaire6001 2 года назад +2

    Great stuff, I'm excited for the reviews.

  • @wizardoflizards965
    @wizardoflizards965 2 года назад +4

    Lol the hammer just resting on the gpu on the table 6:00 😂

    • @lightmanek
      @lightmanek 2 года назад

      It's a metaphor for RDNA3 hammering what looks like 4080 card ...

    • @RafaleC77th
      @RafaleC77th 2 года назад

      It was super ominous LOL. Yes folks was nervous for sure.

  • @fcfdroid
    @fcfdroid 2 года назад

    😂*THIS IS JUST IN!* i/o means i/o. Thanks for that cunning detailed well thought out description Steve! Now back to our sponsor where Sir-Oblivious will get us acquainted with the obvious 😉

  • @sff.f
    @sff.f 2 года назад +6

    22:32 The USB-C port is also DP 2.1! More, not less, functionality! My 6900 XT had one, and with a single cable I was able to power and send a signal to a mobile 17" monitor (ASUS XG17): main monitor while traveling, secondary at home.
    Or use dock/KVM functionality made for notebooks to connect your SFF PC to a whole desk setup... over a single USB-C! One AC cord and one USB-C... display signal to the monitor, with USB peripherals and network (RJ45) coming from the monitor!
    People don't know that they need this... 😏

    • @infernaldaedra
      @infernaldaedra 2 года назад +1

      Exactly. Other comments were mentioning peripherals like drawing tablets, VR headsets, USB devices, or a high-performance monitor.

  • @TroggieAK
    @TroggieAK 2 года назад +2

    Me, watching this video at work on silent with auto-subtitles,
    “Man, this Andy guy sure is making an impressive GPS”

  • @SeanCMonahan
    @SeanCMonahan 2 года назад +3

    Are my eyes deceiving me, or do y'all have a new channel badge here on RUclips? It looks sweet! It's a bit brighter than it used to be, right? And I like the subtle gradient.

  • @rustycage82
    @rustycage82 2 года назад

    Cool video. I'm nervous every time that hammer moves.

  • @AspenTitan
    @AspenTitan 2 года назад +23

    It’s interesting to go through this video and see some of the new information that could get interpreted as pluses for 3D artists and (hobbyist) game devs like me, who don’t have all that much money for the newest 4090s and things like that. Seems like AMD could finally become the next big thing for people who do high-usage rendering and baking and game development rather than just gaming.

    • @klobiforpresident2254
      @klobiforpresident2254 2 года назад

      I think for baking few things beat a Fury X (same goes for frying eggs or cooking generally).

    • @sowa705
      @sowa705 2 года назад +1

      Amd is worthless for non gaming workloads - no software compatibility

  • @haikopaiko
    @haikopaiko 2 года назад +1

    Also great video and run through of RDNA 3, I think the team hit another home run with this one 👌

  • @Skubasteph
    @Skubasteph 2 года назад +5

    I imagine it also helps a lot with getting fab allocation when you can make half your GPU on an older node.

    • @infernaldaedra
      @infernaldaedra 2 года назад +1

      It also lets them keep ties with foundries like GlobalFoundries, which used to be AMD's own manufacturing business until they had to sell it off.

  • @ja6896
    @ja6896 2 года назад

    These are the kind of vids I subbed for. Great work

  • @lonelyone69
    @lonelyone69 2 года назад +6

    I think the RX 7000 series has massive potential when it comes to later driver updates that utilize more efficient pipelines and workload management.

    • @DigitalJedi
      @DigitalJedi 2 года назад

      I'm very excited for better ROCm and Open CL support. 24GB at 350W would be the max I can put in my case, and I already use ROCm for my machine learning workloads.

    • @lonelyone69
      @lonelyone69 2 года назад

      @@DigitalJedi I think it'll be amazing for laptops because individual chiplets could be bypassed to reduce power usage while still having enough power to do a task. Especially on more primitive renderers like open gl that don't follow an instruction set.

  • @Fractal_32
    @Fractal_32 2 года назад +1

    Thank you GamersNexus, I have been looking for an RDNA 3 architectural breakdown!

  • @hquest
    @hquest 2 года назад +5

    AMD Chill is one of those features where I feel like I'm the only one in the world who uses it, and likes it. I don't need my VGA running a cinematic at 600 billion FPS, nor do I need the card to be running at 110% all the time. If I'm AFK or the like, I have no problem with the card dropping to 60 FPS (if not further), and for that, AMD Chill is perfect for counterbalancing the abuse the card is under, even though it is already very well cooled via an oversized liquid loop.

    • @dylanherron3963
      @dylanherron3963 2 года назад

      ...did you just use VGA as "Video Graphics Adapter?" Not making fun, just haven't heard that term in 20 years or so. Reminds me of the Voodoo days.
      Mate runs his card in "chill" mode but has his card water-blocked anyway, lmao.

    • @JVlapleSlain
      @JVlapleSlain 2 года назад +2

      AMD Chill is absolutely brilliant. When I'm playing Civ 6 my 6900 XT would be pulling 250 W drawing 600 fps for no reason. Set it to 75/144 and power consumption dropped down to 85 W, cutting out all the waste, keeping temps low and the fans off.

    • @dylanherron3963
      @dylanherron3963 2 года назад

      @@JVlapleSlain honestly, I'm not even mad. I came here cynically but may have learned aspects of my 6650XT that I'd like to explore further.
      What refresh rate are you guys running for monitors, curious.

    • @JVlapleSlain
      @JVlapleSlain 2 года назад +2

      @@dylanherron3963 I am on a 3440x1440 ultrawide @ 144 Hz with FreeSync.
      My Radeon Chill global setting is a 75 floor, to prevent sync flickering, and a 144 ceiling, because I don't need more than 144 fps on a 144 Hz display...
      That stops the card from running at full throttle in weaker games when it doesn't need to, reducing power, temps and stress (+ longevity).

    • @hquest
      @hquest 2 года назад

      @@dylanherron3963 Calling GPU as your video card is like calling CPU your computer case - but that's outside the point. Call me old school if you want, but Voodoo were amazing cards ;)
      I run a 4k/120Hz screen. I don't cap it but leave Chill for 60FPS. Yes, the power draw is significant less, which means less heat, less noise and since cooled with water, I can game for an entire day and not even see the GPU reaching 55C, which is just amazing to extend its lifetime. Chills at 30-35C, depending on how warm I leave the house and how long I've been playing.

  • @RepsUp100
    @RepsUp100 2 года назад +1

    Can't wait for the reviews

  • @thygek.mikkelsen2324
    @thygek.mikkelsen2324 2 года назад +3

    You should really do a deep dive into Radeon Chill, it's awesome, I love it so much!

    • @LutraLovegood
      @LutraLovegood 2 года назад

      Used to use HiAlgo Chill and Boost in Skyrim at the same time. Hope he finally brings the ability to use both at the same time back. (the man behind the HiAlgo name got hired by AMD, hence why they added Boost and Chill)

  • @Benjamas-
    @Benjamas- 2 года назад +1

    Love these sort of deep dives, thanks for all your work

  • @dangingerich2559
    @dangingerich2559 2 года назад +6

    The MCD for this makes me think of how it might help Ryzen. I wonder if it would hurt or help Ryzen to either eliminate the L3 cache per CCD and put the L3 on the memory controller like an MCD, or eliminate the per-core L2, make the current L3 into a shared L2 per CCD, and have the L3 on an MCD. Then, instead of an IO die on the package, have an MCD plus a separate IO die that would just have the PCIe and USB/SATA controllers, which could be made on older manufacturing processes. What do you guys think?

    • @phantoslayer9332
      @phantoslayer9332 9 месяцев назад +1

      The latency would kill it on CPUs; GPUs aren't super latency-sensitive.

    • @Razzbow
      @Razzbow 4 месяца назад

      @@phantoslayer9332 Bingo. Bandwidth is king for GPUs.

  • @jayhsyn
    @jayhsyn 2 года назад +1

    Makes me really excited for the future of AMD tech

  • @reyluna0
    @reyluna0 2 года назад +23

    When I think Gamers Nexus:
    I think Quality, Integrity and Trustworthiness. Thank you for making content that's truly amazing.

    • @reyluna0
      @reyluna0 2 года назад

      @@SoficalAspects aye, you understand

  • @RussLudwig
    @RussLudwig 2 года назад +2

    I'm not an architect either, but I very much liked the info here 👍 Hammertime!

  • @hamada3ido125
    @hamada3ido125 2 года назад +10

    We are less than one week from the actual release.
    I'm so hyped; can't wait for it to somehow get close to beating Nvidia.

    • @mjkittredge
      @mjkittredge 2 года назад +2

      Costing $600 less is already beating NVIDIA. How close the XTX gets to the 4090 is just gravy.

  • @daveme3582
    @daveme3582 2 года назад +1

    Who else couldn't take their eyes off the careless placement of that hammer on a fine 4080/4090 LOL
    Thanks Steve

  • @JamesonHuddle
    @JamesonHuddle 2 года назад +3

    I'm getting excited about possibly not "having" to get a 4090 for unreal engine and blender workloads, I can't wait for the full review!

    • @infernaldaedra
      @infernaldaedra 2 года назад +4

      You never really had to; Blender on AMD works great even on older cards. Same with Unreal lmao.

    • @JamesonHuddle
      @JamesonHuddle 2 года назад +2

      @@infernaldaedra I put it in quotes for that exact reason. It doesn't change the fact that works great is a lot different from the 4090 which in very technical terms works twice as great. The 4090 is currently in a class of its own. Myself and others are hoping that AMD is about to change that.

    • @infernaldaedra
      @infernaldaedra 2 года назад

      @@JamesonHuddle Yeah basically for the time being. If the 7000 series is even within margin of the 4090 it shouldn't matter.

  • @johntang9173
    @johntang9173 2 года назад

    a very detailed lecture! Thanks GN!

  • @Hagop64
    @Hagop64 2 года назад +19

    Hoping their drivers match up with the hardware advances they've made. AMD GPUs are looking very promising if the drivers are solid.

    • @GamersNexus
      @GamersNexus  2 года назад +34

      The last generation of drivers certainly has been far better than the Vega days!

    • @lain2236ad
      @lain2236ad 2 года назад +12

      They're functionally flawless now

    • @aerosw1ft
      @aerosw1ft 2 года назад +7

      I really wish more people would do research before just uttering this again and again everywhere. Their drivers have been rock solid.

    • @SilentButDudley
      @SilentButDudley 2 года назад +6

      Drivers have been okay-good for a while now, ever since they fixed the drivers for the 5700xt series.

    • @stuartandrews4344
      @stuartandrews4344 2 года назад +1

      @@aerosw1ft I'll second that... Can't fault the drivers; they have been faultless.

  • @paco1669
    @paco1669 2 года назад +1

    Love these deep dives!

  • @ronjatter
    @ronjatter 2 года назад +3

    I do enjoy the amd Radeon graphics card for computer. It is good. Soon there will be review. Yay!

  • @Dr.Ahmed86
    @Dr.Ahmed86 2 года назад

    Thank you Steve for the detailed session.

  • @franzescodimitra8815
    @franzescodimitra8815 2 года назад +3

    Early GN gang?

  • @geebsterswats
    @geebsterswats 2 года назад +1

    Enabling SAM on my PC brought Forza Horizon 5 from ~100fps to ~125fps. This was the only setting I changed between tests (and reboot to BIOS). I have a 6800xt phantom gaming, 3700x, and 3600 MT/s DDR4. This might be an outlier case with 25% uplift, or perhaps something else was going on. I just booted, ran the benchmark, rebooted to BIOS to enable ReBar, then ran the benchmark again. I did have to use mbr2gpt, but that was done before the first benchmark. Overall very happy with this setup. SOTR 1440p maxed out w/ medium ray tracing was ~116fps. I only tested that after enabling ReBar

  • @earnistse4899
    @earnistse4899 2 года назад +6

    Honestly, I'm most looking forward to how the Arc cards age as driver support improves, and to the upcoming Battlemage GPUs. The Arc A770 on paper has TONS of transistors and cool features, but the software hasn't been there. I wouldn't be surprised if in 6 months or so we start to see the A770 match or surpass the 3070 in many performance metrics.

    • @shepardpolska
      @shepardpolska 2 года назад +2

      Battlemage GPU. Singular. They got cut down to a single die for laptop

    • @earnistse4899
      @earnistse4899 2 года назад

      @@shepardpolska no desktop cards?

    • @shepardpolska
      @shepardpolska 2 года назад +1

      @@earnistse4899 According to leaks, those got cut while they figure out how to make the GPUs scale to higher EU counts. Moore's Law Is Dead talked about it a couple of times.

    • @earnistse4899
      @earnistse4899 2 года назад

      @@shepardpolska wow🤬🤬

    • @TheDravic
      @TheDravic 2 года назад

      @@shepardpolska Moore's Law is Dead is a fake news channel and you should feel bad for even giving him views.

  • @baozhongimperial3851
    @baozhongimperial3851 2 года назад

    Really interesting, thanks a lot for these deep dive videos !!

  • @DarkReturns1
    @DarkReturns1 2 года назад +5

    So excited for this release, AMD has a real opportunity here

  • @1977rmjr
    @1977rmjr 2 года назад

    Nice 👍 . Let's wait for real benchmarks!

  • @infernaldaedra
    @infernaldaedra 2 года назад +3

    Also, I commend AMD for working on primitives and culling pipelines rather than focusing solely on black-magic fuckery like DL anti-aliasing or DL super-sampling technologies that may improve performance but don't perfectly represent the graphical images they are simulating.

  • @tuckerhiggins4336
    @tuckerhiggins4336 2 года назад +2

    100% worth marketing the card size. I won't get a 4080 or 4090 specifically because of the size

  • @alexd481
    @alexd481 2 года назад +5

    Haven't had an AMD GPU in years, not since they were still ATI. However, with EVGA no longer in the GPU business, I am considering a switch to AMD. My GTX 1070 is getting a little outdated.
    Does AMD have any plans for a 7000-series version of the 6800 XT? The 7900 looks great, but it's a little pricey for my budget. If there were a 7800 for around 650 or less, that would be great.

    • @EdDale44135
      @EdDale44135 2 года назад +6

      The 7800 cards are expected to be released next year.

    • @dylanherron3963
      @dylanherron3963 2 года назад

      Yeah mate, as Ed stated. The ALMOST current number variation of GPU series (7000 series isn't technically out yet, so the "current" series is 6000, which will be "old" models in less than a week) will include the 7800 XT most likely. The flagship models are always released first.

    • @klbk
      @klbk 2 года назад +1

      Be aware that AMD GPUs can sometimes be tricky to handle. What I mean are the drivers for them. Regressions happen way more often than with nvidia. I'm not saying that they're crap in any way, I also own and tested few of their recent products already, but it's more common to get silly problems. Not as plug&play as nvidia, but when it works, it works beautifully. Just needs some troubleshooting sometimes and you should heavily test them while refund period lasts, to check if everything works in your case. I've heard Steve saying (about 1 month ago), about their drivers getting way more care than previously, so fingers crossed as I'm also interested to go AMD again.

    • @sammiller6631
      @sammiller6631 2 года назад +1

      @@klbk If Nvidia has been plug&play for you, you're lucky. Others have had more problems with Nvidia.

    • @klbk
      @klbk 2 года назад

      @@sammiller6631 Care to elaborate? What kind of problems did you have so far? Genuine question because I'm actually curious since I didn't encounter anything so far when building PCs. I might google them to get better informed for the future. In case of AMD I've encountered green line artifacting on my RX480 (desktop and games), driver timeouts after updating the driver (most recent case from about a year ago, happened multiple times), green screen crashes on 5600XT, Problems with rendering Chromium UI/tabs on both 5600XT and 5700XT, Anti-lag causing actual stuttering in some games, Enhanced sync causing black screen in games, ReLive replay causing stutters every few seconds while GPU not being 100% used (the same periods of time), driver causing CPU spikes and hanging the whole system for about a second, in every 5 seconds period when you turn off one of the displays in multi-display environment. Some of them got actually acknowledged and fixed, but some got only kinda fixed. The last example is still valid, but now it hangs for maybe 5 times and then system starts working normally again. Back then it was hanging indefinitely until you unplugged the display or turned it back on. These are some of the problems I personally encountered. The only thing that nvidia did horribly wrong for me was releasing a bad driver long years ago, which broke my 9600 GT completely and it wasn't recognizable by the system anymore. I remember this to be more widespread, so it wasn't a coincidence. Also obviously their Linux drivers were always awful and pathetic.

  • @nocturneuh
    @nocturneuh 2 года назад

    Thank you Steve / GN 🙏 awesome video.

  • @HalfUnder
    @HalfUnder 2 года назад +5

    All I need to know at this point is whether or not Steve and the rest of GN are excited about RDNA3 and its potential. Not from the perspective of reviews, making content, etc. Just whether or not you guys have seen or heard things that have you excited about how RDNA3 is gonna match up. Obviously it's not going to be anywhere near as good as far as RT goes, but that's perfectly fine by me. I'm seriously considering a 7900 XTX to replace my EVGA 3080 FTW3 Ultra. After EVGA leaving the GPU market I kind of just want to put it on display to remember the good ole days.

    • @Axeiaa
      @Axeiaa 2 года назад +4

      Get a RX 7900 XTX from Sapphire ;). As NVIDIA has always had EVGA as a really great exclusive partner AMD has always had Sapphire as a really great exclusive partner. Or as the ultimate middle finger towards NVIDIA a card from XFX
      XFX used to be NVIDIA exclusive ages then shortly made cards for both companies, according to rumours this ticked NVIDIA off (how dare they have the audacity to sell the other brands cards!) and NVIDIA put them on the naughty list - XFX has been a partner to AMD exclusively ever since.

  • @alb.1911
    @alb.1911 2 года назад +2

    The power connector trolling is epic... 😆

  • @FaceyDuck
    @FaceyDuck 2 года назад +4

    Hey Steve, not a RDNA3 specific question but just wondering, what does the metal bracket around the GPU die do? Why are they only on high-end silicon?

    • @dano1307
      @dano1307 2 года назад

      Haven't watched the video yet, but from what you are saying, if it's what I think you mean, that is a bracket that is bent so the cooler/heatsink applies the correct amount of pressure to the die.

    • @FaceyDuck
      @FaceyDuck 2 года назад

      @@dano1307 You mean the leaf spring retention kit? I meant the metal rectangle on the actual GPU substrate. Does it still do the same thing as the retention kit?

  • @VaalkinTheOnly
    @VaalkinTheOnly 2 года назад +1

    Something about how transparent AMD is about their engineering is relieving.

  • @nobody1322
    @nobody1322 2 года назад +3

    Nvidia's lack of 4090s right around Christmas could really turn things around for AMD in terms of GPU sales.

  • @blackhorseteck8381
    @blackhorseteck8381 2 года назад +2

    I see what you did there with the Nvidia GPU and the hammer on top of it, thanks Steve!

  • @falloutdog5275
    @falloutdog5275 2 года назад +3

    Nvidia has gotten anti-consumer to the point where I don't really care about their new stuff anymore; I wanna give AMD a try and see how good it is! Haven't had one since the RX 580 release. I'm actually excited.

    • @gelinrefira
      @gelinrefira 2 года назад +1

      Agreed.

    • @dylanherron3963
      @dylanherron3963 2 года назад

      Brotha, I've owned a new 6650 XT refresh for about 3 and 1/2 months now, upgrading from a 4 year old (but still totally valid!) 1660ti. My first AMD product in a decade. I've had no driver issues, didn't have to upgrade my 650w PSU, (i7 9700 non-K) and I don't currently have interest in Raytracing (just WAY too hard of a performance hit for lighting and shadow effects so minimal, you'd have to point the differences out to me). It's ALMOST a no-compromise 1080p card. Whatever you play, you'll likely be locked to 144 fps (if that's your monitors refresh rate, that is) At 270 USD, it's been the best PC component purchase I've made in 6+ years. AMDs FSR is also coming a long way and is supported by SO many titles.
      If you like DLSS and Raytracing, I'd avoid the low/mid-range AMD cards. Just because I love them doesn't mean that you wouldn't have a better time with those features by going 3060 (12gb) or 3060 Ti. For the love of GOD, avoid that RX 64/6500 and 3060 8gb....

  • @xkxk7539
    @xkxk7539 2 года назад

    This is definitely going to hash at insane rates; can't wait to test it out!!

  • @03chrisv
    @03chrisv 2 года назад +4

    I'd wager a Ryzen 7900x with a Radeon 7900 XT is a potent combination while saving a few hundred bucks by not going balls to the wall with a Ryzen 7950x and Radeon 7900 XTX.

    • @snakkosss5380
      @snakkosss5380 2 года назад +1

      Well, I am going balls to the wall this round; I just hope to be able to snipe a 7900 XTX on release date. I've already pre-ordered an EKWB block for that GPU. That being said, I think your approach is far better than mine: saving money while keeping a high-end gaming experience.

    • @Micromation
      @Micromation 2 года назад +3

      Potent and saving money? Then try 7700X, imho 7900X doesn't fit the criteria.

    • @03chrisv
      @03chrisv 2 года назад +1

      @@snakkosss5380 There's nothing wrong with getting the absolute best that AMD has to offer, I salute you lol.
      I thought about going with a 7950X/7900 XTX combo but the savings of a couple hundred bucks can go towards more SSD storage or more ram. Most modern games still aren't designed to take advantage of anything more than 8 CPU threads (with some exceptions) so I thought the 12 core 24 thread 7900x is more than enough while still being very good at content creation tasks like video editing and rendering which are very CPU thread hungry.
      I plan on overclocking the 7900 XT to have the same core clock speed of the 7900 XTX. So instead of the 7900 XT being 10% to 15% slower than the 7900 XTX it'll probably only be 5% to 7% slower which I can live with. With the money savings I'll be able to go from 32GB of ram to 64GB which will be very useful for my needs outside of gaming.

    • @03chrisv
      @03chrisv 2 года назад

      @@Micromation It's not just for gaming, I need more core count for video editing/encoding as well as realistic 3D rendering for professional use.

    • @Micromation
      @Micromation 2 года назад

      @@03chrisv I'm 3D artist myself (Blender, ZBrush, Substance) - I actually can't remember when was last time I rendered anything on CPU 🤔 Previously CUDA and now OptiX is just way faster in pretty much every scenario. No idea how hardware support is in video editing software though. I'm actually getting AMD GPU for gaming now and keep my Titan in eGPU enclosure and render using that instead. (unfortunately AMD cards are completely useless for that for the last decade or so...)

  • @filipborch-solem1354
    @filipborch-solem1354 2 года назад +2

    Can’t wait til AMD releases their XRX 7990X XTXX gpu.

  • @TheOtherGuy1420
    @TheOtherGuy1420 Год назад +3

    I don't know why nobody is talking about the 7000-series GPU driver timeout issues with DirectX 12 games.

  • @genghischuan4886
    @genghischuan4886 2 года назад +1

    Pretty sure I'm buying one of these 7900 XTXs.

  • @Akkbar21
    @Akkbar21 2 года назад +4

    The only reason I would struggle to buy an AMD GPU is my very poor experience with an ATI card way back in the day, plus horror stories about driver problems in the present. A thorough examination of whether Radeon drivers are as good as GeForce drivers at this point would be very useful. It could eliminate that concern.

    • @Animationsforever
      @Animationsforever 2 года назад +5

      The only thing I've heard about driver issues recently is that there are none.

    • @aerosw1ft
      @aerosw1ft 2 года назад +1

      I've been using a 6700 XT for 2 months now and I've had no driver problems whatsoever. People really need to actually do research before taking everything at face value and then spitting it out again everywhere.

    • @singlsrvngfrnd
      @singlsrvngfrnd 2 года назад

      Haven't had a driver issue in 4 years. Started on an hd7970, moved to a 580, then 5600xt, now a 6800xt.

    • @nedegt1877
      @nedegt1877 2 года назад

      Your arguments are stereotypical arguments. I think it's more of a horror story when you have a chance of burning your house down with melting connectors. Your driver arguments are too silly. Lots of games have issues, but people blame AMD drivers for them. Education is important; don't just repeat stuff you read. Learn about things, because you're being misled. The fastest supercomputer in the world runs on all-AMD hardware and they're doing important work on it. The next one, on AMD CPUs/GPUs, will be faster than all the top 10 supercomputers combined!! So you think AMD can't make hardware and software that are stable? Get real!

    • @bslay4r
      @bslay4r 2 года назад

      If you decide to buy a Radeon card make sure to uninstall the NV driver before you make the switch. Preferably with DDU.

  • @seeibe
    @seeibe 2 года назад +1

    Culling on the GPU side is really interesting. On the software/CPU side it's often not worth it, because you usually load geometry to the GPU in huge batches and keep them there for more than one frame. Really cool to see AMD working on that stuff.

  • @tochztochz8002
    @tochztochz8002 2 года назад +2

    There are rumors of PowerColor AIB cards at 1300 USD for the XT and 1600 USD for the XTX. No matter how great their architecture is, at that price range I WILL GO FOR NVIDIA. I'm not a GPU manufacturer's fanboy, so I will go for whatever is worth my money, and AMD at 1300-1600 is not worth it.

    • @SweatyFeetGirl
      @SweatyFeetGirl 2 года назад +3

      do you realize those "rumours" are just chinese scalpers selling before the launch?

    • @tochztochz8002
      @tochztochz8002 2 года назад

      @@SweatyFeetGirl My hope is that the rumour is false, as you mention. I want to buy an XTX on the upcoming 13th if the price is reasonable.

    • @generaldane
      @generaldane 2 года назад +2

      Nvidia not worth it for those price ranges either to be fair

    • @tochztochz8002
      @tochztochz8002 2 года назад

      @@generaldane It's the best choice if AMD goes for the same price range and Nvidia starts lowering MSRPs.

    • @generaldane
      @generaldane 2 года назад

      @@tochztochz8002 Best choice yeah but still not worth it imo

  • @pouriciel
    @pouriciel 2 года назад

    Thank you very much for speaking slower than usual. It makes it way easier to listen to.

  • @RageQuitSon
    @RageQuitSon 2 года назад

    I like that light on the left near the Liquid nitro

  • @MarcsSpark
    @MarcsSpark 2 года назад

    I love that in the HYPR-RX slide the GPU being used is a Radeon VII!

  • @19vangogh94
    @19vangogh94 2 года назад

    Beautiful analysis, loved it!

  • @carl8790
    @carl8790 2 года назад

    I've learned a ton from this, thanks.