I am an Apple fan. And I can use a calculator better. A 50 percent improvement in efficiency doesn’t mean that you can get the same performance at half the power; that would be a 100% improvement in efficiency. Instead, the right math is 225W times 0.66 for the same performance = around 150W. In order to achieve the speculated 2x increase over the 5700 XT you would need around 300W (225W x 2 x 0.66).
You couldn't be more right...!! Double the performance with a ~33% power increase, that is... around 300W.
HOLY SHIT THE CAMERA LOOKS INSANELY DIFFERENT. The colors look more accurate, people say you look orange but it looks better.
Tbf i've been recommending 800W power supplies to anyone who asks about high end gaming rigs for ages now, and the reason is that you need to account for heavy loads on multiple rails; the amperage available is not always ideal unless you go overkill or go single rail, fully modular (which in my opinion should be the default, but what do i know!). Plus, if you don't overload the PSU you get a nice benefit of the fan not becoming a tornado.
FYI "Freedesk" is actually FreeDesktop.org, the unofficial standards body for all the plumbing behind the desktop environments on Linux. The organization publishes specifications, but also provide hosting for a lot of the projects that write software for Linux desktop like OpenGL & Vulkan drivers, Window system, audio session managers, network config, etc.
A split memory pool is just asking for problems. Look at Fury: when that 4GB buffer was saturated, performance suffered because it had to swap data from system RAM/HDD. AMD had to ship driver patches for every new DX9/DX11 game to prevent over-saturation of that 4GB buffer. Also look at the GTX 970: once that 3.5GB was saturated, the 512MB segment at 24GB/s caused issues as well. Having a smaller HBM pool with GDDR6 as a multi-level cache system will be hard to juggle, especially with every new game that comes out.
Fury didn't have HBCC and Big Navi will have more than 4GBs. To the developer it would look like one big memory pool. I'm not saying they're going with a split memory system.
@@philippengl2342 Problem is the transition between the memory pools, losing hundreds of GB/s going from HBM to GDDR6 to DDR4 to grab data, will cause stuttering. They're going to have to make sure to keep the GDDR6 ahead of the HBM
@@DrRachelRApe Fury did have HBM, and the sudden drop in bandwidth when it had to grab data off of DDR3/4 caused performance drops and stuttering until AMD patched in a buffer profile for the game. Unless AMD has built a more automatic caching system into RDNA to combat the sudden bandwidth drop going from HBM/HBC to GDDR6 to DDR4, devs are going to have to keep it in mind, since a lot more of the hardware-level coding is put onto them with DX12/Vulkan. Or AMD is going to have a lot of work patching game profiles.
@@908967 Just to note: HBCC and HBM are two different things...
And the idea would be that the most important data would be in HBM, the less important but still needed data in GDDR6, and "hopefully" nothing in system RAM. If GDDR6 is fast enough by itself to feed a GPU, then HBM + GDDR6 would be faster than GDDR6 alone in most cases, if handled well (thus if data is not in HBM, the GPU gets it from GDDR6 and flags it as a candidate to move to HBM). A good memory design and a bit of flagging on the developers' side would make it perfect. And since the high speed SSD on the next gen consoles can be used as "RAM" (textures are loaded from there), it wouldn't be surprising at all if developers could already flag resources as more important so they are preloaded. The requirements are done and tested; now the question is: is it worth having a more complex memory unit, with a bit more latency to detect the location of the data, and potential issues with memory traversal and large textures, etc...
And how do you market that safely? 4GB of HBM + 8GB of GDDR6, for example, would have the same capacity, probably, as a pure 12GB card, due to wanting locality for predictability of access time when accessing the next virtual memory address.
I would love to see HBM + GDDR6, just to "play around" and try to understand how the memory controller works... but I doubt it, especially for gaming, since I fear the added complexity wouldn't be worth it.
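If it helps picture the flagging idea described above, here is a tiny Python sketch of a tier-placement policy. Everything in it is hypothetical: the tier sizes, the dev_hint flag and the access_count heuristic are invented for illustration, not anything AMD has described.

```python
# Hypothetical two-tier VRAM placement: hot resources in HBM, the rest in
# GDDR6, spilling to system RAM only as a last resort. All names and sizes
# below are invented for illustration.
from dataclasses import dataclass

GB = 1024 ** 3

@dataclass
class Tier:
    name: str
    capacity: int
    used: int = 0

    def fits(self, size: int) -> bool:
        return self.used + size <= self.capacity

@dataclass
class Resource:
    name: str
    size: int
    dev_hint: bool = False   # developer flagged as "important, preload me" (hypothetical)
    access_count: int = 0    # runtime hotness tracked by the driver (hypothetical)

def place(resources, tiers):
    """Assign each resource to the fastest tier it fits in.

    Developer-flagged and hottest resources are considered first, so they
    land in HBM; everything else falls through to GDDR6 and then system RAM.
    """
    placement = {}
    ranked = sorted(resources, key=lambda r: (r.dev_hint, r.access_count), reverse=True)
    for res in ranked:
        for tier in tiers:  # tiers ordered fastest -> slowest
            if tier.fits(res.size):
                tier.used += res.size
                placement[res.name] = tier.name
                break
    return placement

tiers = [Tier("HBM", 4 * GB), Tier("GDDR6", 8 * GB), Tier("DDR4", 16 * GB)]
resources = [
    Resource("shadow_maps", 1 * GB, dev_hint=True, access_count=900),
    Resource("hero_textures", 3 * GB, access_count=700),
    Resource("streaming_world", 6 * GB, access_count=120),
    Resource("audio_banks", 1 * GB, access_count=5),
]
print(place(resources, tiers))
# {'shadow_maps': 'HBM', 'hero_textures': 'HBM', 'streaming_world': 'GDDR6', 'audio_banks': 'GDDR6'}
```

The interesting design question is exactly the one raised above: whether the driver can do this re-flagging automatically, or whether developers have to hint it, like they already do for streaming priorities on consoles.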
A 50% performance improvement isn't the same going backwards. If you add 50% and want to get back to the original number, you subtract 33%.
Not an Apple fan...featuring Timmy Joe. Lol.. nice manly beard bro.
timmy joe makes videos about computers on the internet :) love his channel
In a podcast with Adored Tv and Tom at Moore's Law 5 months ago, Adored dropped some hints that the Big Navi architecture can utilize both memory types.
Camera is wobbly and the mic sounds like it's picking up more of your fan/coil whine than ever.
If true, it would make me EXTREMELY HAPPY, and I'm not sure I understand why people will be surprised. Also not sure why people would believe that most people buying flagship consumer grade GPUs actually care about power consumption, or that HBM is so drastically more expensive than GDDR6. People also seem to forget that Vega 10 and Vega 20 all came with a minimum of 8GB HBM and were intended to retail at MSRPs from $500 to $1500, from Vega 56 all the way up to the Vega Frontier Edition. The Radeon VII retailed for $800 and was a step up from the Frontier Edition with a few more features. HBM production is ramping up now compared to when Vega first came out.
There is nothing remotely odd about a SKU or 2 of Navi 2X using HBM, nor will it be any more expensive than what we have seen in the past in AMD's consumer grade flagships. AMD have also not really rolled a technology back after presenting it in previous generations of GPUs. HBM came with the Fury GPUs and has persisted, going from 4GB HBM in the Fury to 16GB in the Radeon VII. The Radeon VII was AMD's flagship for 2019 despite what people say about the RX 5700 XT. In fact the Navi 10/RDNA 1 was such a failed architecture that it has never been marked as compatible with ROCm like all other AMD GPUs. Personally I want to believe that AMD are going to double down even more with HBM and increase the VRAM from 16GB HBM to 32GB HBM on the RX 6900 XT, with the RX 6900 getting 16GB. Fingers crossed!
That metallic drdrdrdrdr sound in the background (hope that made sense, not easy to describe) sounds like a bad fan or an old HDD working overtime. It gets a bit tiring to listen to after a while. Would be great if you could remove it somehow in your future videos.
Dislike the video so that he notices. Thats what I have been doing
Paul, your conclusions about cache & memory latency from Vega to Navi are correct. However I can see how you were confused looking for numbers to back that up. Both architectures had a 16KB first level cache per CU, and a 4MB L2 cache. Navi changed the L1 to an "L0" cache in each CU, then added a shared 128KB "L1" cache per dual compute unit as part of their multi-level cache redesign. They also doubled the cache bandwidth of the L0 cache to keep the instruction pipeline full for the beefier scalar in Navi.
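For anyone trying to keep the renaming straight, here is that hierarchy written out as a small Python structure. The sizes and the "per dual compute unit" scope come from the comment above, not from AMD documentation, so treat them as the commenter's figures:

```python
# Cache hierarchy as described in the comment above (sizes and scopes are the
# commenter's figures, not taken from AMD documentation).
cache_hierarchy = {
    "Vega (GCN)": [
        ("L1", "16KB",  "per CU"),
        ("L2", "4MB",   "shared across the GPU"),
    ],
    "Navi (RDNA)": [
        ("L0", "16KB",  "per CU, double the bandwidth of Vega's L1"),
        ("L1", "128KB", "shared per dual compute unit (per the comment)"),
        ("L2", "4MB",   "shared across the GPU"),
    ],
}

for arch, levels in cache_hierarchy.items():
    print(arch)
    for level, size, scope in levels:
        print(f"  {level}: {size:>6}  ({scope})")
```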
I still think you need to start from the 52 CU of the Xbox Series X as a base, as that is (kinda) RDNA2, rather than from the 5700 XT.
52 CU @ 1875MHz is (guesstimate) 180 watts.... +50% on this is 270 watts and 78 CU....
Paul. Think you're a legend man ! Love the way you just tell it like it is ! My previous choice of buying a high end G-Sync monitor may just come back to bite me in the arse. If big navi can please all audiences and it can do 100Hz at 3440 by 1440 I'll be a happy man. Otherwise I'll just stick to my nvidia cards.
advantages of hbm:
-Less energy consumption
-Less space needed (both board and die)
-Higher total bandwidth possible
-Higher signal integrity due to lower speed and shorter wires
disadvantages of hbm: cost
AMD has always built GPUs that run efficiently at the low end of the power curve - then they'd push the frequency up to run comparable to Nvidia's mid range - looks like with RDNA2 and 7nm they don't have to blow the power curve. Hmmmmm
Based on the rumors this time around it'll be Nvidia blowing out the power curve and trying to make a beefy enough cooler to keep it quiet.
Hopefully.
AMD has a bad habit of late of pushing the lithography for all it's worth, voltage and clock. Voltage improves yields, so I see why they do that, but it still makes the chips run hotter than they have any business doing. Back off a little bit; let overclocking be the reason the chips run 75°C, not stock speeds.
I've been undervolting AMD GPUs for a while and always got the feeling 7nm (smaller nodes)/the bottom of the power curve was what they've always been preparing for as the differentiator between Intel and Nvidia- I'm so far half right
@@ash98981 Nvidia cherry-picks their dies with binning, hence why you see a select group of fans always defending them, saying their cards are the best or work best... Then you have the group who gets the worst-binned 3rd party cards and says it's bad.
AMD doesn't want to get involved with binning so much, so they set the voltage standards to a level every chip must meet, so they all don't break or draw complaints about the card under- or over-performing at stock.
This is why Nvidia Founders Editions are always 'high end' performers; they are definitely binned, like Intel Extreme CPU parts.
yeah, remember the day the RX 480 was a 150W card (actually a little bit higher), but they pushed it so far that the RX 590 ended up a 225W card that's only about 20% faster
you are right. The cache was raised to 2x16KB because the basic block now is a WGP (work group processor) consisting of 2 CUs, that's why you calculate with a 2x factor. So each cycle, each workgroup can now issue 2 waves of FP32 with Navi. Also double the L2 to 8MB. I think they needed this to keep all CUs fed and not stalling while waiting for other groups to finish.
The logical split is for lower SKUs to have GDDR6 and the top ones (2?) to go for HBM.
Perf/watt is stated, but it derives from multiple factors: process (7nm+, while nV is stuck on 7nm for now), µArch design advances etc. that bring IPC... so it's a tradeoff of many worlds to achieve targets.
IPC 10%
Process 10-15%
And then you have scale, power and speed (GHz).
So finally: a 225W Navi 10 @ 1800MHz with a 251mm2 die on 7nm.. can it now be a 450-500mm2 die on 7nm+ at 200-230W @ 2000+MHz? Idk, maybe. It seems plausible
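Treating those contributions as multiplicative shows how far the rough numbers above get you toward the claimed +50% perf/watt. A toy calculation, with the leftover attributed to design and clock/power tuning; none of these factors are official figures:

```python
# Toy perf/watt decomposition using the rough estimates above. Gains of this
# kind compound multiplicatively, so several ~10% improvements add up quickly.
ipc_gain     = 1.10    # ~10% IPC from architecture changes (assumption)
process_gain = 1.125   # ~10-15% from N7 -> N7+ (midpoint, assumption)
target       = 1.50    # AMD's claimed +50% perf/watt for RDNA2

so_far   = ipc_gain * process_gain
residual = target / so_far
print(f"IPC x process gets you to about {so_far:.2f}x")                         # ~1.24x
print(f"Design/clock/power tuning must supply the remaining ~{residual:.2f}x")  # ~1.21x
```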
Congrats on reaching 20K!
Both AMD and Nvidia are playing a very good game of chicken!
Edit: New camera? You look a bit orange 😉
Hahaha. I thought that too.
He just accidentally bought Trump-branded tanning spray.
It's cuz he's orange
Facial hair is expanding
I actually thought he said a new camera was supposed to be delivered so maybe.
A 502 mm2 EUV 7nm die with 16GB HBM2 makes a lot of sense. Problem is the price. This GPU would probably cost around $1,100 if the price of the 313 mm2 die Radeon VII is any reference. Then again, why not; Ngreedia's top dog will cost the same. I guess pricing is OK if AMD can beat it in raw performance.
@Ignatius David Partogi I don't think we know that info yet. N7+ has better yields (fewer defects) so it is cheaper to produce. I'd imagine AMD would prefer it over N7P
Hopefully it will be much, much more expensive than $1100!
That would make Jensen do cartwheels! And seeing the leather jacket do cartwheels is worth it ;)
R7 was on 7nm when yields were ABYSMAL. So not a great reference.
HBM2E isn't that much more expensive than GDDR6
If there are two versions with one using GDDR6 and the other HBM2, I'm going HBM2. The reason is the bandwidth.
I have gone through the R9 Nano, Vega 64 and Radeon VII. The one thing I can say with certainty is that lag is virtually nonexistent. The VII may trade blows with the 2080, and more often than not the 2080 may beat it by about six percent when not overclocked, but there is a difference in the lag.
Look at video comparisons. Don't just pay attention to the FPS but also the lag. That HBM makes a difference.
Also, I'm not sure, but do the images tend to look better on AMD? It seems like Nvidia has some sort of weird image compression. Could be wrong, but I'm not sure.
you're talking about input lag?
You are right, I constantly switch from AMD to Nvidia and out of the box the image quality of Nvidia cards is atrocious. Especially with freesync monitors.
If you tinker with that primitive GeForce driver interface, the image can be improved, actually really, really well, but then the speed becomes less...
On my 34uc88b I can't completely remove the screen tearing with the GTX 1070 Ti; that was non existent on any of the Radeon cards.
That lag that you are talking about I've noticed myself, but it goes away if you overclock the vram on the GeForce.
Congrats to 20k subscribers Paul! Greetings from Germany!
You can see how much calmer he is when the soda isn't there. His cognitive abilities are clearly impaired but his stress levels are greatly reduced.
best irish techtuber ever!
Thanks for doing what you do Paul
Best techtuber period
Here you go Paul, if you fancy a read re RDNA and all before it
medium.com/high-tech-accessible/an-architectural-deep-dive-into-amds-terascale-gcn-rdna-gpu-architectures-c4a212d0eb9
It's really interesting to see the journey from VLIW to GCN to RDNA
Interesting that Navi has an L0 cache; lots and lots of optimisation went into getting all CUs working all the time. As you said, changes to the wave size to 32 and the SIMD size to 32, so 2 waves can run per cycle rather than 1 wave every four cycles with Vega.
But RDNA also has the ability to merge waves into 64 and run in backwards-compatible mode, for the PS4 of course.
I think RDNA 2 might have further changes to the CUs but it will need to keep the 64-wave mode for PS4.
i don't care which memory, just price/performance matters.
They said when they introduced Navi 10 that this architecture is compatible with both HBM and GDDR6, so yeah. HBM will be for the high end
Congrats for 20k subs.
In my region the difference between the RX 5700 XT and the RX 5700 (non-XT) is 10 EUR, whilst the 5700 XT price starts at 400 EUR. I think 10 EUR is worth spending.
Nah dude better spend that 10 bucks on coffee to go🤣
So much we still don't know and won't know until the release.
AMD did try having a Titan-like card (not in performance); the $1,499 liquid-cooled Vega Frontier Edition is an example.
AMD's last 'Titan' was the 295x2
Why is Obi Wan Kenobi talking to my about computer hardware?...
If power is a concern then HBM doesn’t fit your use case... even Apple has been stupid by integrating HBM into a laptop regardless of the fact that HBM is at the opposite end of the spectrum when your top priorities are power, complexity and cost... I’m seriously dumbfounded at the engineering choices companies are making, looks like marketing have taken over.
GDDRxx hits all of the above, you can integrate them onto an interposer as Samsung makes a variant that is exactly this.
Nobody can afford gpu's prices anymore, we don't care.
I'm gonna have to end up buying a 4700G LOL
People that make good life decisions can afford them just fine
@@Taintedmind That kind of logic is why companies love taking advantage of the general public LOL
@@Carnyzzle More like lack of competition on the top end, while colluding to artificially increase prices together; companies have already been caught doing exactly that
Where are you working? My mate works in a warehouse and can still afford these gpu prices.
2048 Paul, not 248 :P
Ohhhhh
For a sec i thought he was having a stroke. He started to mumble some nonsense :D
i do not want to be tedious but your math at 11:20 is wrong. Let's make a point:
if the 5700 XT has a performance value of '100' at 225W, +50% more eff./watt means it has '150' performance at 225W, and if we assume a best case scenario where power scales linearly with performance (which it doesn't; more often you need disproportionately more power), we will at best have the same performance as a 5700 XT at 2/3 the power, which is 150W.
So it's not as good as you said, with 112W for the same performance... But it's still very good :)
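For anyone following along, here is that correction as a few lines of Python, using the same generous assumption that power scales linearly with performance:

```python
# +50% performance per watt means 1.5x the work per joule, NOT half the power.
navi10_power = 225.0       # W, RX 5700 XT board power
perf_per_watt_gain = 1.5   # AMD's claimed RDNA2 improvement

# Same performance as Navi 10 -> power drops to 1/1.5, i.e. two thirds:
same_perf_power = navi10_power / perf_per_watt_gain
print(f"Same performance: {same_perf_power:.0f} W")   # 150 W, not 112 W

# Double the Navi 10 performance (the 'Big Navi' rumour):
double_perf_power = 2 * navi10_power / perf_per_watt_gain
print(f"2x performance:   {double_perf_power:.0f} W")  # 300 W
```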
It's not being tedious, it's being accurate. That is important.
120CU big Navi with 32GB HBM2 for $1500 I'd buy it
It's not gonna happen. Such GPU would draw 500-600W of power. Even prosumer GPUs target "only" 300-400W envelope.
Who wouldn't
That would be like $2500-2800
Hopefully it would cost more, much much more!
Jensen would be envious of the price point ;)
@JustVictor 17 we know nothing about CDNA, but I expect many more "shallow" CUs than in RDNA/2. It will focus on compute power, and in compute applications it makes sense to write your application for specific hardware. So it won't face the generalization problem of game engines.
"foon"
That is just adorable.
Have an i7 9700F with an RTX 2070 and a 600W PSU, and I only get up to around 500 watts. You're saying you think I can keep my 600W PSU with Big Navi? I'm for sure going with Big Navi but always thought I'd have to get a bigger PSU.
That 20k looks good..... Congrats again.
It's only price/performance that matters
Tin Foil Hat time:
I wonder if Nvidia saw the writing on the wall way back when Polaris launched? They saw AMD's node progression and knew that Navi was in the pipe. So they followed up the GTX 10 series with ray-tracing-enabled cards that couldn't really do more than give a glimpse of what was to come. They did that with the 20 series of cards, and we all know that unless you had the absolute top end cards, either the 2080 Ti or the Titan RTX itself, you really couldn't use ray tracing without taking a massive performance hit, making your games a stuttering mess. The two top of the line cards could do it, with a massive performance hit, but they had enough raw grunt to get it done at playable frame rates of a consistent 60+ frames. They did this with the hope of shifting the narrative away from raw performance onto "features", because they knew Navi was going to take the outright performance crown eventually. Nvidia being Nvidia knew that with their market share they had the ability to sell these new features to the masses as something they couldn't possibly game without, even though at launch the performance was abysmal. They've since gotten it mostly ironed out and have found out exactly what it's going to take hardware-wise on the RTX side of things. AMD in the meantime will be launching what I believe will actually be the fastest graphics cards on the market but will fall behind in ray tracing, and there's the rub. Even though AMD's going to be demonstrably faster at gaming, Nvidia's going to use the features narrative to bludgeon them over the head yet again...
I don't think Nvidia would have anticipated that early that Navi would be this good.
I do agree with most of what you've said tho. I think AMD might win in standard rasterized performance but they'll lose in RT and DLSS alternatives.
The delusion is real. AMD has been beaten solidly for 5+ years now. And Nvidia are worried because of Big Navi? This isn't Ryzen versus Intel CPUs. AMD is going to get destroyed again.
If AMD thinks they will win I can easily see them using HBM simply because -they can-. When they are selling the top GPU(s) they can automatically charge more and this gives them room to add HBM which will secure their sales even more. Now I probably could have worded that better but TLDR; HBM in AMD GPUs = yes, if they are feeling confident. Top card = high price with room for some improved memory.
remember Radeon Pro SSG..
Well, it will be cheaper so it can be expensive then?
Without HBM and GDDR6, it's viable to make Big Navi fast but cheaper at a ridiculous price if you focus on performance in other form factors, but adding those in and you ramp up prices to Nvidia levels, just to show off?
We still don't see MCM being practical yet for a couple of reasons, one being that they are trying to sell us monolithic dies at decent pricing right now, so there isn't a huge urge to shove us all onto MCM products. If they can scale to two dies, then they can scale to 4, 8, 16, etc.
This would mean that their products would EASILY, and instantly be rendered obsolete in less than one generation if they push for them too fast.
Like say if efficiency per scaling is the exact same or better, selling a GPU with 2 dies at a cost less than 2x of a monolithic GPU, will be a huge loss in profit in their investor's point of view.
Why sell two dies in a GPU when you can sell two GPUs? Basically.
If you wanna see MCM rushed out and polished a lot, you need a huge amount of competition to force people to upgrade for cheap, or a huge amount of software that 'pushes' people to upgrade to stay 'compatible'. Either way, it's pro-consumer, or pro-corporate.
HBM.. so it's gonna make it a thousand times the cost then, just like the Vega cards..
Whenever I watch these videos I crave Lucky charms they’re magically delicious
40 CU/256 bit bus/8GB GDDR6 ($300)
64 CU/384 bit bus/12GB GDDR6 ($450)
72 CU/512 bit bus/16GB GDDR6 ($650)
80 CU/2048 bit bus/16GB HBM2E ($900)
That's my prediction.
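Out of curiosity, those predicted bus widths translate to theoretical bandwidth like this, assuming 14 Gbps GDDR6 and roughly 3.2 Gbps per pin for HBM2E; both data rates are my assumptions, not part of the prediction above:

```python
# Theoretical bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps) -> GB/s.
# Assumed data rates: 14 Gbps GDDR6, 3.2 Gbps HBM2E.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

skus = [
    ("40 CU, 256-bit GDDR6",  256,  14.0),
    ("64 CU, 384-bit GDDR6",  384,  14.0),
    ("72 CU, 512-bit GDDR6",  512,  14.0),
    ("80 CU, 2048-bit HBM2E", 2048,  3.2),
]
for name, bus, rate in skus:
    print(f"{name:<24} ~{bandwidth_gbs(bus, rate):.0f} GB/s")
# 40 CU, 256-bit GDDR6     ~448 GB/s
# 64 CU, 384-bit GDDR6     ~672 GB/s
# 72 CU, 512-bit GDDR6     ~896 GB/s
# 80 CU, 2048-bit HBM2E    ~819 GB/s
```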
Your prices are WAYYYYY TOO low. No way AMD will be giving you a 40 CU card with 12GB of RAM for 300. Not happening anytime soon
@@shehrozkhan9563 AMD are though... They're refreshing the 5700 XT as the mid range card for RDNA 2 at a reduced price. Paul has mentioned this numerous times as well. It also makes complete sense because it should have always been 300 XD. It launched at 450 because it was beating the 2070 before SUPER was launched. It got dropped to 400 to win over consumers. You're telling me they wouldn't drop the 5700 XT to 300 under the RDNA 2 refresh to compete with the 3060? I will say, I meant to list it as 8GB of VRAM 😂.
@@shehrozkhan9563 No the prices are just right dude lol what r u smoking.
@@dainluke Okay now it makes more sense. A 5700 xt refresh with 8 gb ram for 300, I can see that. Although I think their highest end navi will be like $1000.
Also yeah I do agree, the 5700 series SHOULD HAVE been Polaris replacements, not Vegas.
@@bobpro583 elaborate pls
295x2 is arguably their first titan
The "killer" will have HBM2E as ammunition.
Poor PSU, glowing red and sh!7.
hate the mumbo jumbo, better times, cheers!
If only AMD cards worked with my g sync monitor 😭😭😭
My thinking exactly 👍
I have the Predator X27P and I can't get a solid answer on whether it supports FreeSync. I know it has VRR over HDMI.
@@Android-ng1wn ye the LG OLEDs support both, as they are FreeSync supported and G-Sync compatible, but i don't think my monitor supports FreeSync, only G-Sync. I'd swap to AMD in a heartbeat if it supported FreeSync as well
if you go over your monitor's refresh rate, G-Sync and whatever shit AMD uses become useless; i just use fast sync
@@samuelkdu ye thanks, I'm aware of that. I use G-Sync on, V-Sync on in the control panel, and RTSS to cap the frame rate to a rate within the G-Sync range. So for me G-Sync (or FreeSync) is essential.
Am betting HBM version is for Apple....
That card has already been detailed tho, which shows that RDNA can use both types of memory
@@No-One.321 You talking about the Apple Radeon Pro 5600M? Am saying that there's a 6600M with HBM for Apple. But that doesn't indicate that PC will get HBM...
@@pete2097 What do you think Apple is? So RDNA using both GDDR6 and HBM doesn't mean that PC can't get HBM? Only someone who is an Nvidia fanboy would come to that conclusion with that logic, because a normal, unbiased person would think: OK, so the 5000 series uses GDDR6 and Apple is getting an RDNA part with HBM, so it's very likely that at least a higher end GPU would get HBM memory.
@@No-One.321 Ha, no, i just think that RDNA2 might not need HBM, and i've got a 5700 XT, so save the fanboy comments. That just makes you look silly.
Nvidia doesn't need HBM, so why should AMD go with it? It's expensive for what it is.
If you recall there are a few lite versions of RDNA2, and I would say one will be the Apple one with HBM...
@@pete2097 the new Apple part is RDNA, not RDNA2
Reading with PAUL!
I am sure it's been said, but your 50% math is incorrect. You have to start with the lower number, not the final number. So if Navi 10 used 225 watts, Navi 2 at the same performance will use 150 watts: 150W x 150% is 225W.
Don't get your hopes up too high, it could be all smoke and mirrors and we are more likely to call it Little Navi hahahah GO NVIDIA
Mini Navi!
i think Jensen would make more from the 100 if he gave it to us for 1.5 - 2g
Doubt it. The yields are probably disgustingly bad on a die that big and datacenters will probably buy more than gamers would.
Nvidia worked with Microsoft to get ray tracing into DirectX, so that is the version that we will see in AMD cards
DXR is Microsoft's version I believe?
@@kriszhao80 yea, Nvidia worked with them to develop it. If Nvidia didn't get it into DirectX it would not be widely adopted.
@@candoslayer RT is working in Vulkan now too
Paul been on the Beers? ;-)
Bandwidth is the weakest link, so the closer and wider the memory the better, and if it's directly accessible by GPUs and CPUs in parallel.. fuck.
69! 69! 69!
The 69 series with HBM2E, I bet it sells at a thousand dollars
When will you give up Paul? It doesn't matter anymore.
Profile pic checks out?
You're obviously unbiased 😂
Lol you done this same video how many times now?
If they keep saying it enough, it might become true!
Since Navi 10 can use HBM2 and AMD calls it Navi 12,
I believe AMD can do whatever they want to do.
You'd need to run at 2.44GHz to get to 20TFLOPs with just 64 CU's. That's not realistic. There probably will be a 64CU card, but it won't be the biggest, and it won't be running at >2.4GHz.
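That 2.44GHz figure falls straight out of the FP32 formula (64 shaders per CU, 2 FLOPs per clock for fused multiply-add). A quick check:

```python
# FP32 throughput: TFLOPs = CUs * 64 shaders/CU * 2 FLOPs/clock * clock (GHz) / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

def clock_needed(cus: int, target_tflops: float) -> float:
    return target_tflops * 1000 / (cus * 64 * 2)

print(f"64 CU for 20 TFLOPs needs {clock_needed(64, 20):.2f} GHz")  # 2.44 GHz
print(f"80 CU at 2.0 GHz gives    {tflops(80, 2.0):.1f} TFLOPs")    # 20.5 TFLOPs
```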
I know someone need to talk about this 🤣
AMD is keeping tight on the info on Big Navi!!
5-6 i dont need hand outs
My 5700 XT at 97W TBP runs around 1200-1300MHz. In Furmark (960x540) this is the difference between 240 FPS and 185 FPS, but it also basically puts my fans at idle (768 RPM), and temps down below 60C with junction temp under 70C. Now obviously, this is using half the power, but still ~70% of the original performance.
900p Furmark sees 130 FPS for 200W, and only drops down to 95 FPS at 101W. You can double the power draw for 30% more performance; or should you double the CU count instead? AMD bet on a smaller die, because that didn't cost anything, and at the higher power draw it was competing with the 2080.
i could see an 80 CU 5900 XT pulling only 220W at 1400MHz; it would likely have been more powerful than a 2080 Ti, though not in every game, because frequency still matters in some games.
Like i have been saying since the beginning, the 5700 XT was meant to compete with the 1660 (mostly because of die size), but then they realised it overclocked well, plopped on more RAM and better cooling, and it, for a time, competed with and sometimes beat the 2080. AMD even said they weren't going to produce a high end card; they surely had test samples, but figured the 5700 XT would be enough, because it was beating Nvidia's 2nd best at a cost to them closer to Nvidia's middle-of-the-road cards, even with the relatively high end RAM, VRM, and cooling.
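Taking the Furmark numbers in this comment at face value (they're the commenter's own measurements, not independently verified), the perf-per-watt picture looks like this:

```python
# Perf/watt from the undervolting figures quoted above (Furmark, 900p).
# These FPS/power pairs are the commenter's own measurements.
points = [("~101 W (undervolted)", 95, 101), ("~200 W (stock-ish)", 130, 200)]
for label, fps, watts in points:
    print(f"{label:<22} {fps / watts:.2f} FPS per watt")
# ~101 W (undervolted)   0.94 FPS per watt
# ~200 W (stock-ish)     0.65 FPS per watt

# Doubling the power buys only ~37% more frames in this test:
print(f"Scaling: {130 / 95:.2f}x FPS for {200 / 101:.2f}x power")
```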
If it has HBM.. my gosh that would be crazy
Yes, because Radeon have totally never done that before...
I have a Vega 56 and a Radeon VII in my tower right now... I NEEEED more HBM. I'd buy it in a second....
@@desjardinsfamily5769 Dont forget to buy the liquid nitrogen ;)
@@AnotherAnonymousFag I threw the side panel in the trash... 😂
Hey my Man, could you please reposition your Mic, or something...PLEEEEEEEEZZZZZZ! :)
thats the problem, give it to me
I've been saying this for a while, HBM2E!!!!
What is it, like a more efficient HBM2?
@@malathomas6141 www.tomshardware.com/reviews/glossary-hbm-hbm2-high-bandwidth-memory-definition,5889.html What Are HBM, HBM2 and HBM2E? A Basic Definition
Lisa Su edition🤣🤣🤣🤣. Fanboys line up!
The hyper-class card should use HBM while the extreme enthusiast and lower tiers should use GDDR6
The chances there isn't a halo Navi are pretty slim. 64-80 CUs, a 2x1024-bit bus, 16 gigs of HBM2E. Really the only question is: does it come out now to bury Nvidia, or when Nvidia announces 7nm Super Amperes?
Yes because Fury, Vega's, Radeon 7 and 5700XT all buried nvidia. Stands to reason that mini(big) navi will too...
Cool story bro but it's pretty damn clear nvidia sat on their ass this gen and your delicious tears won't change that all that much. I've bought team green for the last three gens, but the chances of that happening this time are p much zero.
@@TwoSevenX lol what
A 600-watt PSU on a 350W TDP card? With the supposed new 12-pin? No thanks.
AMD Zeus overthrower of Titans?
248 bit? Not sure how that'll work :P
The AMD 6900 XT will be a monster!! AMD should go all out with the flagship card! 80 CUs, 16GB of RAM, a 512-bit bus and 896GB/s of memory bandwidth.
Sever Sheen $2,449 to $3,999 for the HBM memory version, that would be fun!
;)
Nvidia and Jensen would be green with envy, because AMD would have the balls to be more expensive than Nvidia!
haukionkannel AMD doesn’t charge that much. That's Nvidia’s thing.
Ppl begged them the same, to go all out with RDNA1 as well, but they didn't. They just put out 40 CU cards that performed roughly 5% better than the 2060 and 2070 respectively and called it a day.
Age_of_fire Well if AMD doesn’t bring a 16gb monster this November I’m sticking with my 5700 XT. I’ll be enjoying 1440p gameplay on Halo Infinite.
Age_of_fire I couldn’t agree with u more. U see all the Nvidia gamers playing games like Fallout 4 and Witcher 3 on SLI 2080 Tis at 8K with maxed out graphics. Meanwhile the 5700 XT barely manages to hit 4K on more demanding titles.
never made a titan, not out to STEAL
Amd got this!
Sorry Pepe, nvidia is coming to slay the Amd normies again
Don't care how AMD does it whether with HBM or GDDR6/6X, Just give me something that is 50% faster than my 2080Ti and I'll buy it providing Nvidia doesn't beat it!
hmmm
MathS, Paul.
MathematicS=MathS
Math=Mathematic
Math is not a quantifiable noun...
@@kumbandit 'Math' is a shortened form of 'mathematic'. 'Maths' is a shortened form of 'mathematics'. You don't 'do the math(ematic)' - you 'do the maths'. You don't take 'math' class at school. This would imply the class itself is mathematic(al) in nature. The class is not mathematic(al). The subject that is being taught inside the classroom is mathematics.
The X Window System is the Linux graphics frontend
the audio level on this video is shite as you would say.
i want f acasi
Corona detected
intelamdnvidia fanboy detected
@Ryan Hills I've had only AMD CPUs since November 2019 - a 3600X and a 3950X in my main PC - and I also have only AMD GPUs rn, a 5700 XT in both PCs. I had a 2080 Ti but I sold that card like a month ago because I know it will lose value soon
@@ryanhills2088 'intel amd nvidia' fanboy? You mean a computer fanboy?
@@NatrajChaturvedi just having a crack in it :)
That fan noise is getting worse.. First video where it's been off-putting rather than just background noise
Bro, 128 x 2 is 256... I've never heard of a 248 bit bus. ???? 348? Yea...
I wish the youtubers would stop with the speculation videos. Totally played out. Get new material, please.
All this talk about memory got me wondering why we have no standardized benchmarks for GPU bandwidth. I understand most people testing cards can't really open them up and look at what's on the PCB, but surely they can run a simple benchmark besides the popular ones with arbitrary scales that tell us nothing. A simple google search brought up a couple:
github.com/ekondis/gpumembench
github.com/UoB-HPC/BabelStream
This the new camera?
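Even just timing a big device-to-device copy gives a comparable GB/s number across cards. A minimal sketch, assuming an Nvidia GPU with CuPy installed (an AMD card would need the ROCm/HIP equivalent):

```python
# Minimal GPU memory bandwidth check: time a large device-to-device copy.
# Assumes CuPy on an Nvidia GPU; counts bytes read + written for each copy.
import cupy as cp

N_BYTES = 1 << 30                     # 1 GiB buffer
src = cp.zeros(N_BYTES, dtype=cp.uint8)
dst = cp.empty_like(src)

start, stop = cp.cuda.Event(), cp.cuda.Event()
cp.cuda.Device().synchronize()

start.record()
for _ in range(10):                   # average over a few copies
    cp.copyto(dst, src)
stop.record()
stop.synchronize()

ms = cp.cuda.get_elapsed_time(start, stop)
gb_moved = 10 * 2 * N_BYTES / 1e9     # read + write per copy
print(f"Effective bandwidth: {gb_moved / (ms / 1000):.0f} GB/s")
```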
80+ CUs in and around 2GHz... smack Jensen
00:58 That's 2048-bit, not 248-bit.
buzzing sound back again and very loud and distracting.
Using headphones? Sounds okay on my phone
@@faceplants2 yeah using headphones. The last few videos have almost been unbearable to watch. I love the videos but the constant buzzing makes it tough to sit through 20 minutes straight of that regardless of how good the video is lol..
@@dellecar5824 then don't listen on headphones. Just another example of a minority demanding that the majority bow to their whims
Sold my 2060 Super used today for €435; using GeForce Now for the next month or so until the 3000 series or RDNA2
It's there for sure but nowhere near as distracting to me.
Fix the annoying noise. It's very distracting.
20000