@@sammiller6631 Care to elaborate? What kind of problems have you had so far? Genuine question, because I'm actually curious since I haven't encountered anything so far when building PCs. I might google them to be better informed for the future. In the case of AMD I've encountered: green-line artifacting on my RX480 (desktop and games), driver timeouts after updating the driver (most recent case from about a year ago, happened multiple times), green-screen crashes on a 5600XT, problems with rendering the Chromium UI/tabs on both a 5600XT and 5700XT, Anti-Lag causing actual stuttering in some games, Enhanced Sync causing black screens in games, ReLive replay causing stutters every few seconds while the GPU was not 100% used (during those same periods), and the driver causing CPU spikes and hanging the whole system for about a second, every 5 seconds, when you turn off one of the displays in a multi-display environment. Some of them got acknowledged and fixed, but some only got kinda fixed. The last example is still valid, but now it hangs maybe 5 times and then the system starts working normally again. Back then it hung indefinitely until you unplugged the display or turned it back on. These are some of the problems I personally encountered. The only thing nvidia did horribly wrong for me was releasing a bad driver years ago, which broke my 9600 GT completely so it wasn't recognizable by the system anymore. I remember this being more widespread, so it wasn't a coincidence. Also, obviously, their Linux drivers were always awful and pathetic.
All I need to know at this point is whether or not Steve and the rest of GN are excited about RDNA3 and its potential. Not from the perspective of reviews, making content, etc. Just whether or not you guys have seen or heard things that have you excited about how RDNA3 is going to match up. Obviously it's not going to be anywhere near as good as far as RT goes, but that's perfectly fine by me. I'm seriously considering a 7900 XTX to replace my EVGA 3080 FTW3 Ultra. After EVGA leaving the GPU market I kind of just want to put it on display to remember the good ole days.
Get an RX 7900 XTX from Sapphire ;). As NVIDIA has always had EVGA as a really great exclusive partner, AMD has always had Sapphire as a really great exclusive partner. Or, as the ultimate middle finger towards NVIDIA, a card from XFX. XFX used to be NVIDIA-exclusive ages ago, then briefly made cards for both companies; according to rumours this ticked NVIDIA off (how dare they have the audacity to sell the other brand's cards!) and NVIDIA put them on the naughty list - XFX has been a partner to AMD exclusively ever since.
Haven't watched the video yet, but from what you are saying, if it's what I think you mean, that is a bracket that is bent so the cooler/heatsink has the correct amount of pressure on the die.
@@dano1307 You mean the leaf spring retention kit? I meant the metal rectangle on the actual GPU substrate. Does it still do the same thing as the retention kit?
Nvidia has gotten anti-consumer to the point where I don’t really care about their new stuff anymore, wanna give AMD a try and see how good it is! Haven’t had one since the rx580 release, I’m actually excited
Brotha, I've owned a new 6650 XT refresh for about 3 and 1/2 months now, upgrading from a 4 year old (but still totally valid!) 1660 Ti. My first AMD product in a decade. I've had no driver issues, didn't have to upgrade my 650W PSU (i7 9700 non-K), and I don't currently have interest in ray tracing (just WAY too hard of a performance hit for lighting and shadow effects so minimal you'd have to point the differences out to me). It's ALMOST a no-compromise 1080p card. Whatever you play, you'll likely be locked to 144 fps (if that's your monitor's refresh rate, that is). At 270 USD, it's been the best PC component purchase I've made in 6+ years. AMD's FSR is also coming a long way and is supported by SO many titles. If you like DLSS and ray tracing, I'd avoid the low/mid-range AMD cards. Just because I love them doesn't mean that you wouldn't have a better time with those features by going 3060 (12GB) or 3060 Ti. For the love of GOD, avoid the RX 6400/6500 and the 3060 8GB....
I'd wager a Ryzen 7900x with a Radeon 7900 XT is a potent combination while saving a few hundred bucks by not going balls to the wall with a Ryzen 7950x and Radeon 7900 XTX.
well i am going balls to the wall this round, i just hope to be able to snipe one 7900XTX at release date, i've already pre-ordered an EKWB block for that GPU. that being said i think your approach is far better than mine, saving money while keeping a high-end gaming experience.
@@snakkosss5380 There's nothing wrong with getting the absolute best that AMD has to offer, I salute you lol. I thought about going with a 7950X/7900 XTX combo but the savings of a couple hundred bucks can go towards more SSD storage or more ram. Most modern games still aren't designed to take advantage of anything more than 8 CPU threads (with some exceptions) so I thought the 12 core 24 thread 7900x is more than enough while still being very good at content creation tasks like video editing and rendering which are very CPU thread hungry. I plan on overclocking the 7900 XT to have the same core clock speed of the 7900 XTX. So instead of the 7900 XT being 10% to 15% slower than the 7900 XTX it'll probably only be 5% to 7% slower which I can live with. With the money savings I'll be able to go from 32GB of ram to 64GB which will be very useful for my needs outside of gaming.
@@03chrisv I'm a 3D artist myself (Blender, ZBrush, Substance) - I actually can't remember the last time I rendered anything on a CPU 🤔 Previously CUDA and now OptiX is just way faster in pretty much every scenario. No idea how hardware support is in video editing software, though. I'm actually getting an AMD GPU for gaming now and keeping my Titan in an eGPU enclosure to render with instead. (Unfortunately AMD cards have been completely useless for that for the last decade or so...)
The only reason I would struggle to buy an AMD GPU is my very poor experience with an ATI card way back in the day, plus horror stories about driver problems in the present. A thorough examination of whether Radeon drivers are as good as GeForce drivers at this point would be very useful. It could eliminate that concern.
Using a 6700 XT for 2 months now and I've had no driver problems whatsoever. People really need to actually research before taking everything at face value and then spitting it out again everywhere.
Your arguments are stereotypical arguments. I think it's more of a horror story when you have a chance of burning your house down with melting connectors. Your driver arguments are too silly. Lots of games have issues, but people blame AMD drivers for them. Education is important; don't just repeat stuff you read. Learn about things, because you're being misled. The fastest supercomputer in the world runs on all AMD hardware and they're doing important work on it. The next one, on AMD CPUs/GPUs, will be faster than all of the top 10 supercomputers combined!! So you think AMD can't make hardware and software that are stable? Get real!
Culling on the GPU side is really interesting. On the software/CPU side it's often not worth it, because you usually load geometry to the GPU in huge batches and keep them there for more than one frame. Really cool to see AMD working on that stuff.
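For context on what CPU-side culling usually means, here is a minimal sketch of a coarse per-object frustum test; the plane and object values are hypothetical, and as the comment above says, real engines apply this per object or batch before submitting draw calls, leaving per-triangle work to the GPU.

```python
# Rough sketch of coarse CPU-side culling: test each object's bounding sphere
# against the view-frustum planes and skip its draw call entirely if it lies
# outside. Hypothetical planes/objects; fine-grained per-triangle culling
# stays on the GPU, as noted above.

def outside_frustum(center, radius, planes):
    # Each plane is (nx, ny, nz, d) with the normal pointing into the frustum:
    # a sphere entirely on the negative side of any plane cannot be visible.
    for nx, ny, nz, d in planes:
        if nx * center[0] + ny * center[1] + nz * center[2] + d < -radius:
            return True
    return False

# Toy "frustum" with a single near plane at z = 0, looking toward +z.
planes = [(0.0, 0.0, 1.0, 0.0)]
objects = [((0.0, 0.0, 5.0), 1.0),    # in front of the camera -> kept
           ((0.0, 0.0, -5.0), 1.0)]   # behind the camera      -> culled
draw_list = [obj for obj in objects if not outside_frustum(*obj, planes)]
print(len(draw_list))  # 1
```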
There are rumors of PowerColor AIB cards at 1300 USD for the XT and 1600 USD for the XTX. No matter how great their architecture is, at that price range I WILL GO FOR NVIDIA. I'm not a GPU manufacturer's fanboy, so I will go for whatever is worth my money, and AMD at 1300-1600 is not worth it.
As of a few days ago, there are a TON of new bot spam comments on YT. They copy/paste comments from others and have bot upvotes. Don't click their channels - all spam!
Watch our earlier video about physical card design! ruclips.net/video/8eSzDVavC-U/видео.html
We'll do a tear-down as soon as we have one we can take apart!
sorry my dear. no be sadness. but you no have strong understand four gpu. amd make no good after koduri leaving. my uncle raja koduri making four amd but they forgetting four him. please make video four him celebrashin. u are try your best and four this i have proudness. but please cut hair so i no make confushin if i liking ur physeeq. xcelent four trying and four this i am gr8ful. EDITING: What is this bot? I am indian and very proudness. No be race against me Stefen! You bad man with two much hairs and very arrogance! What india do four you. we are number one four it and computer. no nice. no nice!
@@RanjakarPatel He was great.
Vishnu will bless him.
Isn't Crucial DDR5 kinda terrible atm compared to the competition? Buildzoid mentioned that in a couple of his videos. I'd prefer to see actual good products being promoted in sponsorships, not ones that are merely mediocre at best.
when will you guys be able to post benchmarks? a few days before launch?
My heart dropped, thought the review embargo was lifted.
Ikr, can we just get our fucking review already.
@@RexZShadow We aren't in control of that timing
I would have cried actual tears.
@@GamersNexus about to hibernate until the embargo lifts XD
@@GamersNexus any idea when the embargo lifts?
I love your architecture deep dives. Feels like I was watching the Turing block diagram breakdown recently; can't believe that was 2018. Time flies. Thanks for the high quality content, as you've maintained your quality and integrity even with a growing subscriber count.
Wow! It's been a while since that one!
@@GamersNexus yup. Just checked and youtube says it was 4 years ago with 73k views. You will probably get that many views in a day or two for this video. Pretty crazy.😊
@@Jsteeezz 13k in 21 minutes lmao
33k at an hour
@@metalmaniac788 47k in that same hour.
“You come here for this kind of depth. The I/O die has I/O”
Thank you Steve, this got me good.
Steve, I have to be honest here in saying that about 70-80%~ of this information sort of just goes straight over my head. _However_ you still present this stuff fast enough and interesting enough that I enjoy listening to it all anyways because I know I'll learn at least *something* here, and because this tech is just super fascinating to keep up with. Your analogy with the coaster (which I have by the way, super good coaster) and the GPU (which I don't have by the way) was a good and simple one that I felt got the point across nicely!
Same here... I hear him talking about stuff I have no clue about and I feel like that gif of mathematical equations floating around as someone looks confused lol.
This is why I love GNs. I watch a lot of other YT Tech channels for the high level overview but absolutely love the deep dives like this from GN. Even if I don't fully understand everything Steve and GN do a great job of explaining everything.
It’s going to be really interesting to see the day we start ‘gluing’ GCDs together like Ryzen does CPU cores; hopefully it’s not much more than one more generation off.
Seems like it'll happen eventually!
In GN's engineer video, Amd said that wasn't practical or something like that
@@tuckerhiggins4336 They said the interconnect isn't fast enough to let the GCDs co-operate properly, so it'd be a bit like SLI or Crossfire where they end up getting in each others way more than co-operating. But if they can solve that problem and get the interconnect speed up to where two GCDs can basically act like one GCD, it would allow for them to use multi GCD GPUs.
@@MrHamof I think I heard a number needed somewhere for the interconnect speed needed, something absurd like 7tb/s. Who knows
@@tuckerhiggins4336 Isn't their current MCD to GCD bandwidth like 5TB/s, though. Doesn't seem like the far future.
Thanks for another loaded release, Steve. As always, you guys cover everything we need to stay informed in an easily accessible format. Thanks for always keeping it real. Thanks to all your team for continuing to bring us only the best.
I really wish EVGA would consider making some AMD cards. It would be so amazing to see their production lines preserved and GPUs of their quality remain in the market...
I know; if they aren't making NVIDIA cards it would be awesome. I've always wanted an AMD EVGA card.
I don't think they would survive against the Sapphire and PowerColor market; it's probably why they don't.
Unfortunately, AMD card market is smaller, so they would only get to keep a fraction of their production, but I'd love to see them join - on the other hand, I feel like AMD has more really good exclusive AIBs (like XFX, Sapphire and PowerColor), so they would have a fair bit of hard competition.
Maybe Intel should consider hiring EVGA as a new brand partner.
@@arfianwismiga5912 Intel is going to leave the GPU market in a year or so anyway.
A correction: culling is mostly for backface primitives, i.e. triangles that are facing away from the camera, since that is a very quick check and easy to remove from the pipeline using fixed-function blocks. The z-occlusion/buffering being described in the video is much more complex and usually isn't classified as culling, since primitives can be partially visible, among other things, and thus can't be dropped from the pipeline.
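Since the backface check above comes up repeatedly in this thread, here is a minimal sketch of what it amounts to, in plain Python with hypothetical helper names rather than the fixed-function hardware path: a triangle whose projected vertices wind the "wrong" way faces away from the camera and can be rejected before any per-pixel z-buffer work.

```python
# Minimal backface-culling sketch: a triangle whose screen-space winding is
# reversed faces away from the camera and can be dropped before rasterization.
# Illustrative only, not the fixed-function hardware block described above.

def signed_area_2d(a, b, c):
    """Twice the signed area of triangle (a, b, c) in screen space."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def is_backfacing(a, b, c, front_is_ccw=True):
    area = signed_area_2d(a, b, c)
    return area <= 0 if front_is_ccw else area >= 0

# Usage: cull a triangle list before doing any per-pixel (z-buffer) work.
triangles = [((0, 0), (10, 0), (0, 10)),   # counter-clockwise -> front-facing
             ((0, 0), (0, 10), (10, 0))]   # clockwise -> back-facing, culled
visible = [t for t in triangles if not is_backfacing(*t)]
print(len(visible))  # 1
```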
Nobody goes into such detail as GN thanks bro!
Even back when they only did video game reviews GN was litt
3.5 TB/s of bandwidth is no joke; it would mean 2nd gen Infinity Cache can send 3500 bytes/ns (per nanosecond). That would make traditional rasterization monstrously fast, rasterizing billions of triangles in microseconds. (See the worked conversion a few comments below.)
@ラテちゃん They were doing that too, but RDNA 3 is much better at ignoring triangles much earlier in the pipeline. Anyway, mesh shaders and modern engines already have good culling set up in software rather than relying on the hardware.
@@user-xl7ns4zt6z Culling has existed for a very long time. Even the N64 was capable of culling, I read an entire article talking about it in Nintendo Power back in 1996.
It's just that applying it to ray-tracing is being more and more optimized. For rasterization, culling is more or less completely optimized/mature.
You usually need to compute only about 1.5 million triangles in native 1080p per frame if using Nanite or its equivalents, I don't know how much resources Lumen needs, though.
It's ALWAYS a balancing act.
Even when you compare two cards where ONLY the video memory bandwidth is higher, the tradeoff is that it costs more. But in general you balance everything. And the SOFTWARE often has to make use of certain hardware changes. I remember people getting excited for AMD culling years ago, but most games didn't make use of it at the time, so we got a nice DEMO and that was it.
It's 5.3TB per second not 3.5 and yes to everything else
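For anyone checking the arithmetic in this thread: a bandwidth quoted in TB/s is numerically the same figure in bytes per nanosecond, so with the corrected 5.3 TB/s number,

```latex
5.3\ \mathrm{TB/s}
  = 5.3\times 10^{12}\ \mathrm{B/s}
  = \big(5.3\times 10^{12}\ \mathrm{B/s}\big)\times\big(10^{-9}\ \mathrm{s/ns}\big)
  = 5300\ \mathrm{B/ns}
```

and the same conversion turns the 3.5 TB/s figure quoted above into the 3500 bytes per nanosecond mentioned there.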
Occlusion and culling are fascinating! John Carmack was absolutely genius in this respect; it was the single reason Doom was able to run decently on 486 CPUs. He used a binary tree method to find what needed to be drawn for every possible position and viewpoint. (A small sketch of the idea follows this thread.)
close! it was a tree, but not a binary tree, it used the mini max algorithm
Well the real game changer was the shotgun and of course later the super shotgun but ok
:)
Wasn’t it called a BSP tree (Binary Space Partitioning)?
John Carmack also is a special kind of a dirtbag for how he Bailed on Bethesda/IdSoftware/Zenimax and ran to Facebook with stolen engineering documents asking for $ and to become the lead of Oculus.
It will run well in high detail and full screen even on 386 machines, if you had a fast enough VGA card!
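To settle the tree question above: Doom's renderer used a BSP (binary space partitioning) tree, which is indeed a binary tree; minimax is a different, game-AI algorithm. Below is a minimal front-to-back traversal sketch with hypothetical node fields, not id Software's actual code: visiting the camera's side of each splitting plane first yields surfaces in an order where nothing visited later can occlude anything visited earlier.

```python
# Minimal BSP front-to-back traversal sketch (hypothetical 2D node layout,
# not id Software's code). Each node splits space with a plane/line; the
# camera's half-space is visited first, so later surfaces never occlude
# earlier ones.

class BSPNode:
    def __init__(self, plane, front=None, back=None, surfaces=()):
        self.plane = plane            # (a, b, c): the line a*x + b*y + c = 0
        self.front = front            # subtree on the positive side
        self.back = back              # subtree on the negative side
        self.surfaces = surfaces      # geometry lying on the splitting plane

def side_of(plane, point):
    a, b, c = plane
    return a * point[0] + b * point[1] + c

def front_to_back(node, camera, visit):
    """Visit surfaces so nothing visited later can occlude anything earlier."""
    if node is None:
        return
    camera_in_front = side_of(node.plane, camera) >= 0
    near = node.front if camera_in_front else node.back
    far = node.back if camera_in_front else node.front
    front_to_back(near, camera, visit)   # camera's half-space first
    for surface in node.surfaces:        # then geometry on the split plane
        visit(surface)
    front_to_back(far, camera, visit)    # the other half-space last

# Usage: world split at x = 0, with walls at x = +5 and x = -5.
root = BSPNode((1.0, 0.0, 0.0), surfaces=["divider at x=0"],
               front=BSPNode((1.0, 0.0, -5.0), surfaces=["wall at x=+5"]),
               back=BSPNode((1.0, 0.0, 5.0), surfaces=["wall at x=-5"]))
front_to_back(root, camera=(2.0, 0.0), visit=print)
# -> wall at x=+5, divider at x=0, wall at x=-5
```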
Thanks for a great breakdown! My Modmat arrived around Thanksgiving. Very happy that I could support the chan finally!
What I take away is AMD will keep supporting and optimizing games for Infinity cache. So RX 6000 GPUs won't fall far behind in newer games.
Infinity cache seems really important in the future of SSD based games
If that happens, RX 6000 series will age pretty good, unlike rtx 3000
The cache is transparent to the application so not sure what you are thinking they would optimise for?
@@DragonOfTheMortalKombat doubt, games will drag behind because of consoles. We will have to wait for PS6 to move forward in graphics
@@egalanos You can e.g. optimize how, and how much, data is transferred from GPU memory to the GPU cores. Same as for CPUs.
The problem is latency: if you read from a memory block it needs to be addressed, etc. (look at the timings you can adjust for DDR RAM), and afterwards it needs a few cycles before it is ready again. So you try to transfer as much data as possible in one go.
If you transfer chunks that are too small, you lose performance even though you have a bigger local cache. If you transfer too much, the cache cannot hold all the data.
And I'm not sure about caches in GPUs, but in CPUs you can mark certain cache content to remain in the cache so it will not be evicted.
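A rough CPU-side analogy for the chunk-size point above: the same 64 MiB copied in tiny slices pays per-transfer overhead every time, while large slices amortize it. The per-slice Python overhead here stands in for bus/addressing latency, so treat this as an illustration under that assumption, not a GPU benchmark.

```python
# Illustration of "transfer in bigger chunks": same total data, very different
# number of individual transfers. Pure-Python overhead per slice stands in for
# per-transfer addressing latency; this is an analogy, not a GPU benchmark.
import time

src = bytearray(64 * 1024 * 1024)   # 64 MiB source buffer
dst = bytearray(len(src))

def copy_in_chunks(chunk_size):
    start = time.perf_counter()
    for off in range(0, len(src), chunk_size):
        dst[off:off + chunk_size] = src[off:off + chunk_size]
    return time.perf_counter() - start

print(f"64 B chunks : {copy_in_chunks(64):.3f} s")          # ~a million transfers
print(f"1 MiB chunks: {copy_in_chunks(1024 * 1024):.3f} s")  # 64 transfers
```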
The amount of details in your videos is just insane ! Love this channel 🥰
I’m a software developer working on AI with interest in GPU architecture. Also an avid gamer. I love this content, keep it coming!
What is the stack you are usually working with?
Pytorch compatibility layer to tensor or direct CUDA or other stacks
@@aravindpallippara1577 probably cuda because NVIDIA pays so many software companies to develop for their own compute.
PyTorch and tensorflow. We are moving to the onnx open framework for inferencing. We had to do a lot of running on CPU cores because of the GPU shortage, so no big Nvidia specific optimizations but it’s relevant to mention that AMD isn’t in the AI game at all. I work more on the distributed systems architecture in our group.
@@iamwham AMD does have an ONNX endpoint for their GPUs - it's not very performant, but it does work (a minimal example follows this thread).
I always wanted to work with distributed systems - oh well
@@infernaldaedra They sent their engineers to help. You say it like Nvidia paid for exclusivity.
AMD could've done the same but pre-Lisa Su they didn't have someone with a good vision
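For anyone curious what the ONNX route mentioned above looks like in practice, here is a minimal onnxruntime sketch. The model path and input shape are placeholders, and providers that aren't compiled into your particular build are skipped with a warning, which is what lets the same script fall back from a GPU execution provider to the CPU.

```python
# Minimal onnxruntime sketch for the ONNX path discussed above. The provider
# list is tried in order, so the same code can use an AMD GPU (ROCm build),
# an NVIDIA GPU (CUDA build), or fall back to CPU. "model.onnx" and the
# 1x3x224x224 input are placeholders for whatever network you exported.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["ROCMExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # e.g. one 224x224 RGB image
outputs = session.run(None, {input_name: dummy})
print(session.get_providers(), outputs[0].shape)
```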
I'm really excited to see what Smart Access Video is actually capable of!
What's missing in the video is the new VLIW2 ALUs, with 4 FP32 Flops/cycle instead of 2 like literally all other GPUs from all vendors. I'm really curious how this will perform in OpenCL compute.
It will depend a lot on your workload. Most of the time GPUs end up being bottlenecked by things other than raw ALU compute, like not having enough registers to run everything at once or being stalled on memory (see the worked example after this thread).
Definitely going to be keeping an eye on benchmarks
Isn’t that just matching nvidia for fp32? Or what am I missing here?
@@organichand-pickedfree-ran1463 yes, both doubled FP32 throughput
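A quick illustration of the register-pressure point a few comments up, with purely hypothetical numbers: the register file caps how many wavefronts can be resident on a SIMD at once, and fewer resident waves means less latency hiding.

```latex
\text{resident waves per SIMD}
  \;=\; \min\!\Big(\text{wave slots},\;
        \Big\lfloor \tfrac{\text{register file size}}{\text{registers needed per wave}} \Big\rfloor\Big)
```

With a hypothetical 1024-entry register file and 16 wave slots, a kernel needing 128 registers per wave leaves 8 waves resident, while one needing 256 leaves only 4, so there are fewer other waves to switch to while one is stalled on memory.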
So cool to mark the area you are talking about (usually listen to your videos as kind of podcast so couldn't notice before, sorry)
That was a great summary/recap of all info released so far, thanks! Just hoping that AMD will release an updated RDNA architecture whitepaper. As an engineer, I confess that I have a significant bias recently for AMD's CPUs and GPUs because I consider them innovative, elegant designs with well-balanced priorities and great execution. Competitors have their own strong points, from Intel with superior multithreaded scaling to NVidia with still-undisputed lead in RT & ML, but honestly I think AMD's offering are a much better fit for the vast majority of users's real-world needs.
Thanks for this lovely video, as usual. It's always a pleasure to stop by here ;)
(Too lazy to try commenting in English)
If the USB-C implementation is the same as RDNA2, you can plug storage and other devices into it too.
Bro, never stop what you do; this is so much useful info for the average gamer/PC enthusiast. I have been watching since I built my first PC in 2010, and not only did the info from this channel help me pick the best parts for the price to this very day, it also showed me how to overclock and effectively use them to get the best out of them while averaging nice, low, consistent temps and insane clocks, as well as keeping all my components nice and dependable. So ty man!
Type-C on the card is great for artists and anyone who uses a graphics tablet! They often use HDMI if there is no Thunderbolt or alt-display type C available, so for anyone who draws, it basically frees up the HDMI, and/or means you can skip a thunderbolt motherboard. Needing/wanting a thunderbolt compatible mobo really limits options.
Specifically for me, my "good monitor" doesn't support advanced color spaces on displayport, only HDMI, so a type-c on the graphics card lets me plug in my drawing tablet, and use the HDMI for reference/preview. Or watching movies and stuff lol. I've been looking at the thunderbolt compatible motherboards, and some of the USB 4 options, but they're often much more expensive, rarely available in my preferred matx, and, I quite like my current motherboard.
We need more USB-C ports on GPUs... just want a simple cable from the PC to the monitor and to use the monitor as a USB hub.
Yeah, many high-end monitors (for artists) end up benefiting from USB-C with DisplayPort tunnelling. One reason for this is that these monitors also want to be usable by Mac owners, and those users expect single-cable connections with TB/USB-C.
You kind of just convinced me to get a drawing tablet just now lol
@@el-danihasbiarta1200 I have an idea already for how to do that. So many GPUs are already 2-4 slots, so a 3-slot card could easily fit more Type-C connections, but it would make GPUs even larger because each Type-C connection has to provide a wattage output unless it's functioning as display-only.
I went with the aorus master x570 as it has a thunderbolt 3 header.
It would be nice to see not just performance reviews when these launch, but quality as well. Given FSR, audio sound suppression, real time encoding, etc are all selling points, having that compared to the nvidia dlss, broadcast, etc features could be interesting. Probably not the same video, to be sure, but something of an in depth look at how effective the tech is would be great.
Content!
Kinda stoked on how Navi 40 and beyond will shake out.. chiplet gpus are neat.
I imagine if they can get the architecture design to continue to scale and improve density, it will eventually take over servers at high densities. This kind of technology can potentially push the whole industry forward, where we might see products like the NVIDIA DGX become obsolete and instead see racks, blades, or systems that have huge bays of these kinds of smaller but intricately integrated GPU designs.
@@infernaldaedra We can't shrink processes on silicon for ever. This may be the work around until a new paradigm in IC technology is rolled out. Bring on the photons!
Thanks Steve. ... and thanks to the entire GN crew.
I just got a good deal on a used 6800XT incl. waterblock. Couldn't be happier, although I must admit that the architectural advances of RDNA3 sound quite juicy :P
I'd like to add that I am glad they are advertising the 8-pin. I know it seems silly, but the 12-pin represents a trend in modern PCs that I don't like: power draw getting so big that they need to make new connectors. Seeing the normal connectors tells me their GPUs won't draw ridiculous power, and I much prefer that, both for space/thermal performance and just flat-out energy usage.
I do care about power usage on my future PCs now.
The RTX40 series draw significantly less power than the 30 series in actual gaming but most people are still misinformed.
@@EarthIsFlat456 The RTX40 series still draws significant power. That's why they catch fire.
@@sammiller6631 No 4090 has caught fire, and the reason for any melting is an improper connection.
This is a fantastic deep dive. Very interesting, and very well explained to a person who isn't an engineer.
A note on the out-of-order execution explanation given at 9:00: what Steve describes is actually pipelining (starting the next instruction before the previous one has finished). Out-of-order execution, on the other hand, is exactly what it says - executing instructions in an order that's different from the one given in the code. A modern processor has a multitude of execution units for different kinds of instructions (or sometimes it may have more than one unit for the same type). If an instruction will use an execution unit that's currently free, and that instruction does not depend on the results of any part of the code ahead of it that still hasn't been executed, you can start that instruction ahead of its order in the code. This increases the overall utilization of the different execution units inside the processor and thus raises the number of instructions executed per clock.
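To make the distinction above concrete, here is a toy dependency example with illustrative pseudo-instructions (not any real ISA): pipelining overlaps the stages of instructions in program order, while out-of-order issue lets an independent later instruction start while an earlier one is still waiting on its operands.

```python
# Toy example for the pipelining vs. out-of-order distinction above.
#
#   i1: a = load(mem)   # long-latency load
#   i2: b = a + 1       # depends on i1 -> stalls an in-order pipeline
#   i3: c = x * y       # independent  -> an OoO core can issue it during i1
#   i4: d = c + z       # depends on i3
#
# In-order pipelining only overlaps the fetch/decode/execute stages of i1..i4
# in program order; out-of-order issue can run i3 ahead of i2 because their
# operands don't overlap.

deps = {"i1": set(), "i2": {"i1"}, "i3": set(), "i4": {"i3"}}

def ready(done):
    """Instructions whose operands are all produced: candidates for OoO issue."""
    return [i for i, d in deps.items() if i not in done and d <= done]

done, cycle = set(), 0
while len(done) < len(deps):
    issue = ready(done)          # an OoO scheduler picks from this set each cycle
    print(f"cycle {cycle}: can issue {sorted(issue)}")
    done.update(issue)           # assume 1-cycle latency for simplicity
    cycle += 1
# cycle 0: can issue ['i1', 'i3']
# cycle 1: can issue ['i2', 'i4']
```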
Always love these, really teaches a lot about the hardware and it’s little intricacies. Thanks Steve and the team at Gamer’s Nexus!
I totally love how you explain things and how deep you routinely dive into whichever topic you are covering. Us true Nerds thank you. Keep up the great work. 🐺💜👼
Understanding your lack of knowledge on a particular subject (in this case CPU/GPU architecture) is a sign of having enough knowledge to understand your shortcomings. A good thing. It happens with any subject or skill at some point if you progress. I studied some computer architecture as part of my education, but not enough to understand all the details.
Definitely a good trait, especially understanding that processors are designed by teams of hundreds of people, not just one person. There are too many technologies and complexities in such a small thing; there isn't even close to enough time in a day to talk about everything that goes into a modern x86 CPU or graphics card.
@@1st_DiamondHog It's called illusory superiority. Dunning-Kruger is a narrow subset.
Smart Access Video is actually really exciting and interesting. A lot of cool technologies here.
I'm seriously incredibly grateful for this. I absolutely love this. Thank you Gamers Nexus
Very interesting and informative as always. Insert the "Thank you Steve" meme here.
I switched to AMD because they seem to align with the way I see things: instead of throwing a big die at the problem and being wasteful, they're more concerned about proper resource usage and optimization. The fun part is that despite nVidia throwing all the money at it, AMD still manages to somewhat compete and even beat them, especially at normal prices.
Except that amd loses pretty bad in rt
@@helloguy8934 I don't really get the obsession with rt when most people are still using old-ass gpus that can't hit 60fps ultra on new games.
@@Mark-kr5go These are not old GPUs, these are new GPUs. Why are you bringing old GPUs into the conversation about how new GPUs from AMD are slower at ray tracing than Nvidia's GPUs?
Obviously if you want to keep playing Team Fortress 2 you don't really need to worry about buying anything newer than 2012.
@@Mark-kr5go maybe consider not being salty when people enjoy good performance on RT like every AMD fanboy out there.
@@the0000alex0000 well considering the 7900 xtx barely loses to the 4080 in raytracing tasks (between 3 percent and 15 percent) it should be able to do raytracing perfectly fine
I love these videos that you do Steve and GN crew!
Video released 7 minutes ago
Content: 37 minutes
People liking before they watch
100% like
You're on top of your game, Steve.
Well done to all the staff @ gamers nexus
They just know it will be the best content available on the subject
Great stuff, I'm excited for the reviews.
Lol the hammer just resting on the gpu on the table 6:00 😂
It's a metaphor for RDNA3 hammering what looks like 4080 card ...
It was super ominous LOL. Yes folks was nervous for sure.
😂*THIS IS JUST IN!* i/o means i/o. Thanks for that cunning detailed well thought out description Steve! Now back to our sponsor where Sir-Oblivious will get us acquainted with the obvious 😉
22:32 The USB-C is also DP 2.1! More, not less, functionality! My 6900XT had one, and with a single cable I was able to power and send a signal to a mobile 17" monitor (ASUS XG17): main monitor while traveling, secondary at home.
Or use dock/KVM functionality made for notebooks to connect your SFF PC to a whole desk setup... over a single USB-C! One AC cord and one USB-C... display signal to the monitor, and USB peripherals and network (RJ45) from the monitor!
People don't know that they need this... 😏
Exactly other comments were mentioning peripherals like drawing tablets, VR headsets, USB, or a high performance monitor
Me, watching this video at work on silent with auto-subtitles,
“Man, this Andy guy sure is making an impressive GPS”
Are my eyes deceiving me, or do y'all have a new channel badge here on RUclips? It looks sweet! It's a bit brighter than it used to be, right? And I like the subtle gradient.
Pretty sure it's the same
Cool video. I'm nervous every time that hammer moves.
It’s interesting to go through this video and see some of the new information that could get interpreted as pluses for 3D artists and (hobbyist) game devs like me, who don’t have all that much money for the newest 4090s and things like that. Seems like AMD could finally become the next big thing for people who do high-usage rendering and baking and game development rather than just gaming.
I think for baking few things beat a Fury X (same goes for frying eggs or cooking generally).
Amd is worthless for non gaming workloads - no software compatibility
Also great video and run through of RDNA 3, I think the team hit another home run with this one 👌
I imagine it also helps a lot with getting fab allocation when you can make half your GPU on an older node.
It also lets them keep ties with foundries like GlobalFoundries, which used to be AMD's own manufacturing business until they had to sell it off.
These are the kind of vids I subbed for. Great work
I think rx 7000 series has massive potential when it comes to later driver updates that utilizes more efficient pipelines and workload management.
I'm very excited for better ROCm and Open CL support. 24GB at 350W would be the max I can put in my case, and I already use ROCm for my machine learning workloads.
@@DigitalJedi I think it'll be amazing for laptops because individual chiplets could be bypassed to reduce power usage while still having enough power to do a task. Especially on more primitive renderers like open gl that don't follow an instruction set.
Thank you GamersNexus, I have been looking for a RDNA 3 architectural break down!
AMD Chill is one of those features where I feel like I'm the only one in the world who uses it - and likes it. I don't need my VGA running a cinematic at 600 billion FPS, nor do I need the card to always be running at 110%. If I'm AFK or the like, I have no problem with the card dropping to 60 FPS (if not lower), and for that, AMD Chill is perfect to counterbalance the abuse the card is under - even though it is already very well cooled via an oversized liquid loop. (A minimal frame-cap sketch follows this thread.)
...did you just use VGA as "Video Graphics Adapter?" Not making fun, just haven't heard that term in 20 years or so. Reminds me of the Voodoo days.
Mate runs his card in "chill" mode but has his card water-blocked anyway, lmao.
AMD Chill is absolutely brilliant. When I'm playing Civ 6 my 6900XT would be pulling 250W drawing 600fps for no reason. Set it to 75/144 and power consumption dropped down to 85W, cutting out all the waste, keeping temps low and the fans off.
@@JVlapleSlain honestly, I'm not even mad. I came here cynically but may have learned aspects of my 6650XT that I'd like to explore further.
What refresh rate are you guys running for monitors, curious.
@@dylanherron3963 I am on 3440x1440 ultrawide @ 144Hz w Freesync
My Radeon Chill global settings is set to 75 floor to prevent sync flickering and 144 ceiling because i don't need more than 144fps on a 144hz display...
That stops the card from running at full throttle on weaker games when it doesn't need to, reducing power, temps and stress(+longevity)
@@dylanherron3963 Calling the GPU your video card is like calling the CPU your computer case - but that's beside the point. Call me old school if you want, but Voodoo cards were amazing ;)
I run a 4K/120Hz screen. I don't cap it, but I leave Chill at 60 FPS. Yes, the power draw is significantly less, which means less heat and less noise, and since it's cooled with water I can game for an entire day and not even see the GPU reaching 55C, which is just amazing for extending its lifetime. It chills at 30-35C, depending on how warm I leave the house and how long I've been playing.
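For readers wondering where the power savings in this thread come from, the core mechanism is just idling for the rest of each frame slot instead of rendering flat out. Here is a minimal frame-limiter loop; render_frame() is a placeholder, and Radeon Chill additionally varies the cap with input activity, which this sketch does not model.

```python
# Minimal frame-limiter sketch showing why an FPS cap (Radeon Chill, in-game
# caps, etc.) cuts power: the GPU/CPU sit idle for the rest of each frame slot
# instead of rendering as many frames as possible. render_frame() is a
# placeholder for real game work.
import time

TARGET_FPS = 75
FRAME_TIME = 1.0 / TARGET_FPS

def render_frame():
    time.sleep(0.002)  # pretend this (light) scene needs ~2 ms to render

for _ in range(10):
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_TIME:
        time.sleep(FRAME_TIME - elapsed)   # idle time = power not spent
    print(f"frame took {time.perf_counter() - start:.4f} s")
```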
Can't wait for the reviews
You should really do a deep dive into Radeon Chill, it's awesome, I love it so much!
Used to use HiAlgo Chill and Boost in Skyrim at the same time. Hope he finally brings the ability to use both at the same time back. (the man behind the HiAlgo name got hired by AMD, hence why they added Boost and Chill)
Love these sort of deep dives, thanks for all your work
The MCD for this makes me think of how this might help the Ryzen. I wonder if it would hurt or help Ryzen to either eliminate the L3 cache per CCD and put the L3 on the memory controller like an MCD or eliminate the per core L2 and make the current L3 into a shared L2 per CCD and have the L3 on an MCD. Then instead of an IO die on the package, have an MCD and a separate IO die that would just have the PCIe and USB/SATA controllers separate that can be done with older manufacturing processes. What do you guys think?
The latency would kill it, gpus aren’t super latency sensitive
@@phantoslayer9332 Bingo. Bandwidth is king for GPUs.
Makes me really excited for the future of AMD tech
When I think Gamers Nexus:
I think Quality, Integrity and Trustworthy. Thank you for doing content that's truly amazing
@@SoficalAspects aye, you understand
I'm not an architect either, but I very much liked the info here 👍 Hammertime!
we are less than one week from the actual release.
I'm so hyped, can't wait to see it somehow get close to beating Nvidia.
Costing $600 less is already beating NVIDIA. How close the XTX gets to the 4090 is just gravy.
Who else couldn't take their eyes off the careless placement of that hammer on a fine 4080/4090 LOL
Thanks Steve
I'm getting excited about possibly not "having" to get a 4090 for unreal engine and blender workloads, I can't wait for the full review!
You never really had to. Blender on AMD works great even on older cards. Same with Unreal lmao
@@infernaldaedra I put it in quotes for that exact reason. It doesn't change the fact that works great is a lot different from the 4090 which in very technical terms works twice as great. The 4090 is currently in a class of its own. Myself and others are hoping that AMD is about to change that.
@@JamesonHuddle Yeah basically for the time being. If the 7000 series is even within margin of the 4090 it shouldn't matter.
a very detailed lecture! Thanks GN!
Hoping their drivers match up with the hardware advances they've made. AMD GPUs are looking very promising if the drivers are solid.
The last generation of drivers certainly has been far better than the Vega days!
They're functionally flawless now
I really wish more people would do research before just uttering this again and again everywhere. Their drivers have been rock solid.
Drivers have been okay-good for a while now, ever since they fixed the drivers for the 5700xt series.
@@aerosw1ft I'll second that. Can't fault the drivers, they have been faultless.
Love these deep dives!
I do enjoy the amd Radeon graphics card for computer. It is good. Soon there will be review. Yay!
Thank you Steve for the detailed session.
Early GN gang?
gang gang
¿ƃuɐƃ Nפ ʎlɹɐƎ
Enabling SAM on my PC brought Forza Horizon 5 from ~100fps to ~125fps. This was the only setting I changed between tests (and reboot to BIOS). I have a 6800xt phantom gaming, 3700x, and 3600 MT/s DDR4. This might be an outlier case with 25% uplift, or perhaps something else was going on. I just booted, ran the benchmark, rebooted to BIOS to enable ReBar, then ran the benchmark again. I did have to use mbr2gpt, but that was done before the first benchmark. Overall very happy with this setup. SOTR 1440p maxed out w/ medium ray tracing was ~116fps. I only tested that after enabling ReBar
Honestly I'm looking forward most to how the Arc cards age as driver support improves, and to the upcoming Battlemage GPUs. The Arc A770 on paper has TONS of transistors and cool features, but the software hasn't been there. I wouldn't be surprised if in 6 months or so we start to see the A770 match or surpass the 3070 in many performance metrics.
Battlemage GPU. Singular. They got cut down to a single die for laptops.
@@shepardpolska no desktop cards?
@@earnistse4899 According to leaks, those got cut while they figure out how to make the GPUs scale to higher EU counts. Moore's Law Is Dead talked about it a couple of times.
@@shepardpolska wow🤬🤬
@@shepardpolska Moore's Law is Dead is a fake news channel and you should feel bad for even giving him views.
Really interesting, thanks a lot for these deep dive videos!!
So excited for this release, AMD has a real opportunity here
Nice 👍 . Let's wait for real benchmarks!
Also, I commend AMD for working on primitive and culling pipelines rather than focusing solely on black magic fuckery like DL anti-aliasing or DL super sampling technologies that may improve performance but don't perfectly represent the graphical images they are simulating.
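As a conceptual illustration of what a primitive-culling stage is doing, here's the classic backface test in Python. This is only the textbook idea, not AMD's hardware pipeline; the function names and the counter-clockwise winding assumption are mine.

```python
# Textbook backface-culling test (conceptual only; on real hardware this
# happens in the GPU's geometry/primitive pipeline, not in Python).
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_backfacing(v0, v1, v2, camera_pos):
    """True if a counter-clockwise triangle faces away from the camera and can be culled."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    return dot(normal, sub(camera_pos, v0)) <= 0.0

# A triangle in the z=0 plane is front-facing from z=+5 and back-facing from z=-5.
print(is_backfacing((0, 0, 0), (1, 0, 0), (0, 1, 0), camera_pos=(0, 0, 5)))   # False
print(is_backfacing((0, 0, 0), (1, 0, 0), (0, 1, 0), camera_pos=(0, 0, -5)))  # True
```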
100% worth marketing the card size. I won't get a 4080 or 4090 specifically because of the size
Haven't had an AMD GPU since back when they were still ATI. However, with EVGA no longer in the GPU business, I am considering a switch to AMD. My GTX 1070 is getting a little outdated.
Does AMD have any plans for a 7000 series version of the 6800xt? The 7900 looks great, but a little pricey for my budget. If there was a 7800 for around 650 or less, that would be great.
The 7800 cards are expected to be released next year.
Yeah mate, as Ed stated. The almost-current GPU series (the 7000 series isn't technically out yet, so the "current" series is the 6000 series, which will be the "old" models in less than a week) will most likely include a 7800 XT. The flagship models are always released first.
Be aware that AMD GPUs can sometimes be tricky to handle - what I mean is the drivers. Regressions happen way more often than with Nvidia. I'm not saying they're crap in any way - I also own and have tested a few of their recent products - but it's more common to run into silly problems. Not as plug & play as Nvidia, but when it works, it works beautifully. It just needs some troubleshooting sometimes, and you should test the card heavily while the refund period lasts to check that everything works in your case. I've heard Steve say (about a month ago) that their drivers are getting way more care than previously, so fingers crossed, as I'm also interested in going AMD again.
@@klbk If Nvidia has been plug & play for you, you're lucky. Others have had more problems with Nvidia.
@@sammiller6631 Care to elaborate? What kind of problems have you had so far? Genuine question, because I'm actually curious, since I haven't encountered anything so far when building PCs. I might google them to be better informed for the future. With AMD I've encountered: green line artifacting on my RX 480 (desktop and games), driver timeouts after updating the driver (most recent case about a year ago, happened multiple times), green screen crashes on a 5600 XT, problems with rendering the Chromium UI/tabs on both the 5600 XT and 5700 XT, Anti-Lag causing actual stuttering in some games, Enhanced Sync causing black screens in games, ReLive replay causing stutters every few seconds while the GPU wasn't at 100% usage, and the driver causing CPU spikes that hung the whole system for about a second every 5 seconds when you turn off one of the displays in a multi-display setup. Some of these were actually acknowledged and fixed, but some were only kinda fixed. The last example is still valid, but now it hangs maybe 5 times and then the system starts working normally again; back then it hung indefinitely until you unplugged the display or turned it back on. These are some of the problems I personally encountered. The only thing Nvidia did horribly wrong for me was releasing a bad driver many years ago that broke my 9600 GT completely - it wasn't recognized by the system anymore. I remember this being more widespread, so it wasn't a coincidence. Also, obviously, their Linux drivers have always been awful and pathetic.
Thank you Steve / GN 🙏 awesome video.
All I need to know at this point is whether or not Steve and the rest of GN are excited about RDNA3 and its potential. Not from the perspective of reviews, making content, etc. Just whether or not you guys have seen or heard things that have you excited about how RDNA3 is gonna match up. Obviously it's not going to be anywhere near as good as far as RT goes, but that's perfectly fine by me. I'm seriously considering a 7900 XTX to replace my EVGA 3080 FTW3 Ultra. After EVGA leaving the GPU market, I kind of just want to put it on display to remember the good ole days.
Get an RX 7900 XTX from Sapphire ;). Just as NVIDIA has always had EVGA as a really great exclusive partner, AMD has always had Sapphire. Or, as the ultimate middle finger towards NVIDIA, get a card from XFX.
XFX used to be NVIDIA-exclusive ages ago, then briefly made cards for both companies. According to rumours this ticked NVIDIA off (how dare they have the audacity to sell the other brand's cards!) and NVIDIA put them on the naughty list - XFX has been an exclusive AMD partner ever since.
The power connector trolling is epic... 😆
Hey Steve, not an RDNA3-specific question, but just wondering: what does the metal bracket around the GPU die do? Why is it only on high-end silicon?
Haven't watched the video yet, but from what you are saying, if it's what I think you mean, that is a bracket that is bent so the cooler/heatsink applies the correct amount of pressure on the die.
@@dano1307 You mean the leaf spring retention kit? I meant the metal rectangle on the actual GPU substrate. Does it still do the same thing as the retention kit?
Something about how transparent AMD is about their engineering is relieving.
Nvidia's lack of 4090s right around Christmas could really turn things around for AMD in terms of GPU sales this holiday season.
I see what you did there with the Nvidia GPU and the hammer on top of it, thanks Steve!
Nvidia has gotten anti-consumer to the point where I don't really care about their new stuff anymore. I wanna give AMD a try and see how good it is! Haven't had one since the RX 580 release; I'm actually excited
Agreed.
Brotha, I've owned a new 6650 XT refresh for about 3 and a half months now, upgrading from a 4-year-old (but still totally valid!) 1660 Ti. It's my first AMD product in a decade. I've had no driver issues, didn't have to upgrade my 650W PSU (i7 9700 non-K), and I don't currently have any interest in ray tracing (just WAY too hard of a performance hit for lighting and shadow effects so minimal you'd have to point the differences out to me). It's ALMOST a no-compromise 1080p card. Whatever you play, you'll likely be locked at 144 fps (if that's your monitor's refresh rate, that is). At 270 USD, it's been the best PC component purchase I've made in 6+ years. AMD's FSR has also come a long way and is supported by SO many titles.
If you like DLSS and ray tracing, I'd avoid the low/mid-range AMD cards. Just because I love them doesn't mean you wouldn't have a better time with those features by going 3060 (12GB) or 3060 Ti. For the love of GOD, avoid the RX 6400/6500 and the 3060 8GB....
This is definitely going to hash at insane rates, can't wait to test it out!!
I'd wager a Ryzen 7900X with a Radeon 7900 XT is a potent combination while saving a few hundred bucks by not going balls to the wall with a Ryzen 7950X and a Radeon 7900 XTX.
Well, I am going balls to the wall this round. I just hope to be able to snipe one 7900 XTX on release day; I've already pre-ordered an EKWB block for that GPU. That being said, I think your approach is far better than mine - saving money while keeping a high-end gaming experience.
Potent and saving money? Then try the 7700X; imho the 7900X doesn't fit the criteria.
@@snakkosss5380 There's nothing wrong with getting the absolute best that AMD has to offer, I salute you lol.
I thought about going with a 7950X/7900 XTX combo, but the savings of a couple hundred bucks can go towards more SSD storage or more RAM. Most modern games still aren't designed to take advantage of more than 8 CPU threads (with some exceptions), so I figured the 12-core, 24-thread 7900X is more than enough while still being very good at content creation tasks like video editing and rendering, which are very CPU-thread hungry.
I plan on overclocking the 7900 XT to the same core clock speed as the 7900 XTX. So instead of the 7900 XT being 10% to 15% slower than the 7900 XTX, it'll probably only be 5% to 7% slower, which I can live with. With the money saved I'll be able to go from 32GB of RAM to 64GB, which will be very useful for my needs outside of gaming.
@@Micromation It's not just for gaming, I need more core count for video editing/encoding as well as realistic 3D rendering for professional use.
@@03chrisv I'm a 3D artist myself (Blender, ZBrush, Substance) - I actually can't remember the last time I rendered anything on the CPU 🤔 Previously CUDA and now OptiX is just way faster in pretty much every scenario. No idea how hardware support is in video editing software, though. I'm actually getting an AMD GPU for gaming now and keeping my Titan in an eGPU enclosure to render with instead. (Unfortunately, AMD cards have been completely useless for that for the last decade or so...)
Can’t wait til AMD releases their XRX 7990X XTXX gpu.
I don't know why nobody is talking about the 7000 series GPU driver timeout issues with DirectX 12 games.
Pretty sure I'm buying one of these 7900 XTXs.
The only reason I would struggle to buy an AMD GPU is my very poor experience with an ATI card way back in the day, plus horror stories about driver problems in the present. A thorough examination of whether Radeon drivers are as good as GeForce drivers at this point would be very useful. It could eliminate that concern.
The only thing I've heard about driver issues recently is that there are none.
I've been using a 6700 XT for 2 months now and I've had no driver problems whatsoever. People really need to actually do research before taking everything at face value and then spitting it back out everywhere.
Haven't had a driver issue in 4 years. Started on an HD 7970, moved to a 580, then a 5600 XT, now a 6800 XT.
Your arguments are stereotypical arguments. I think it's more of a horror when you have a chance of burning your house down with melting connectors. Your driver arguments are too silly. Lots of games have issues, but people blame AMD drivers for them. Education is important; don't just repeat stuff you read. Learn about things, because you're being misled. The fastest supercomputer in the world runs on all-AMD hardware and they're doing important work on it. The next one on AMD CPUs/GPUs will be faster than all of the top 10 supercomputers combined!! So you think AMD can't make hardware and software that are stable? Get real!
If you decide to buy a Radeon card make sure to uninstall the NV driver before you make the switch. Preferably with DDU.
Culling on the GPU side is really interesting. On the software/CPU side it's often not worth it, because you usually load geometry to the GPU in huge batches and keep it there for more than one frame. Really cool to see AMD working on that stuff.
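A rough sketch of that CPU-side tradeoff, under my own assumptions (one bounding sphere per batch, frustum planes with inward-facing normals): the CPU only does a cheap sphere-vs-plane test per GPU-resident batch and issues draws for what survives, while per-triangle culling stays on the GPU. The `Plane`, `batch_visible`, and `submit_draw` names are hypothetical, not any engine's real API.

```python
# Coarse CPU-side culling over GPU-resident batches (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Plane:
    # Plane stored as nx*x + ny*y + nz*z + d, with the normal pointing into
    # the frustum, so a non-negative signed distance means "inside this plane".
    nx: float
    ny: float
    nz: float
    d: float

    def signed_distance(self, x: float, y: float, z: float) -> float:
        return self.nx * x + self.ny * y + self.nz * z + self.d

def batch_visible(center, radius, frustum_planes) -> bool:
    """False only if the batch's bounding sphere lies fully outside some plane."""
    cx, cy, cz = center
    return all(p.signed_distance(cx, cy, cz) >= -radius for p in frustum_planes)

def draw_visible(batches, frustum_planes, submit_draw) -> None:
    """batches: iterable of (center, radius, gpu_handle) for geometry already in VRAM."""
    for center, radius, gpu_handle in batches:
        if batch_visible(center, radius, frustum_planes):
            submit_draw(gpu_handle)  # just issue the draw; no geometry re-upload
```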
There are rumors of PowerColor AIB cards at 1300 USD for the XT and 1600 USD for the XTX. No matter how great their architecture is, at that price range I WILL GO FOR NVIDIA. I'm not a GPU manufacturer's fanboy, so I'll go for whatever is worth my money, and AMD at 1300-1600 is not worth it.
Do you realize those "rumours" are just Chinese scalpers selling before the launch?
@@SweatyFeetGirl I hope those rumours are false, as you mention. I want to buy an XTX on the upcoming 13th if the price is reasonable.
Nvidia isn't worth it at those price ranges either, to be fair.
@@generaldane It's the best choice if AMD goes for the same price range and Nvidia starts lowering MSRPs.
@@tochztochz8002 Best choice, yeah, but still not worth it imo.
Thank you very much for speaking slower than usual. It makes it way easier to listen to.
I like that light on the left near the Liquid nitro
I love that in the HYPR-RX slide the GPU being used is a Radeon VII!
Beautiful analysis, loved it!
I've learned a ton from this, thanks.