Big Navi Using HBM??? And GDDR6?? wtf?? 🤔

  • Published: 12 Nov 2024
  • Science

Comments • 235

  • @marctech1996
    @marctech1996 4 года назад +7

    I am an Apple fan. And I can use the calculator better. A 50 percent improvement in efficiency doesn't mean that you get the same performance at half the power; that would equal a 100% improvement in efficiency. Instead, the right math would be 225W times 0.66 for the same performance = around 150W. In order to achieve the speculated 2x increase in comparison to the 5700 XT you would need around 300W (225W x 2 x 0.66).

    • @Zachard
      @Zachard 4 года назад

      You couldn't be more right! Double the performance with a ~33% power increase, that is... around 300W.
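
As a quick sanity check on the arithmetic in the two comments above (the 225W baseline and the +50% perf-per-watt figure come from the thread; everything else follows from them):

```python
# Power implied by a +50% performance-per-watt improvement over a 225 W baseline.
baseline_power = 225.0       # 5700 XT board power, as quoted above
perf_per_watt_gain = 1.5     # "+50% performance per watt"

# Matching the baseline's performance takes 1/1.5 of the power, not half of it.
same_perf_power = baseline_power / perf_per_watt_gain
print(f"Same performance: ~{same_perf_power:.0f} W")    # ~150 W

# Doubling the baseline's performance takes twice that.
double_perf_power = 2 * baseline_power / perf_per_watt_gain
print(f"2x performance:   ~{double_perf_power:.0f} W")  # ~300 W
```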

  • @morpheus_9
    @morpheus_9 4 года назад +1

    HOLY SHIT THE CAMERA LOOKS INSANELY DIFFERENT. The colors look more accurate, people say you look orange but it looks better.

  • @NeutronicalGaming
    @NeutronicalGaming 4 года назад +1

    Tbf I've been recommending 800W power supplies to anyone who asks about high-end gaming rigs for ages now. The reason is that you need to account for heavy loads on multiple rails, and the amperage available is not always ideal unless you go overkill or go single-rail fully modular (which in my opinion should be the default, but what do I know!). Plus, if you don't overload the PSU you get a nice benefit: the fan doesn't become a tornado.

  • @rhekman
    @rhekman 4 года назад +3

    FYI "Freedesk" is actually FreeDesktop.org, the unofficial standards body for all the plumbing behind the desktop environments on Linux. The organization publishes specifications, but also provide hosting for a lot of the projects that write software for Linux desktop like OpenGL & Vulkan drivers, Window system, audio session managers, network config, etc.

  • @908967
    @908967 4 года назад +8

    A split memory pool is just asking for problems. Look at Fury: when that 4GB buffer was saturated, performance suffered because it had to swap data from system RAM/HDD. AMD had to patch new drivers for every new DX9/DX11 game to prevent oversaturation of that 4GB buffer. Also look at the GTX 970: once that 3.5GB was saturated, the 512MB at 24GB/s caused issues as well. Having a smaller HBM pool with GDDR6 as a multi-level cache system will be hard to juggle, especially with every new game that comes out.

    • @DrRachelRApe
      @DrRachelRApe 4 года назад +1

      Fury didn't have HBCC and Big Navi will have more than 4GB. To the developer it would look like one big memory pool. I'm not saying they're going with a split memory system.

    • @908967
      @908967 4 года назад

      @@philippengl2342 Problem is the transition between the memory pools, dropping hundreds of GB/s of bandwidth, will cause stuttering when going from HBM to GDDR6 to DDR4 to grab data. They're going to have to make sure to keep the GDDR6 fed ahead of the HBM

    • @908967
      @908967 4 года назад

      @@DrRachelRApe Fury did have HBM, and the sudden drop in bandwidth when it had to grab data off of DDR3/4 caused performance drops and stuttering until AMD patched in a buffer profile for the game. Unless AMD has built a more automatic caching system into RDNA to combat the sudden drop in bandwidth going from HBM/HBCC to GDDR6 to DDR4, devs are going to have to keep it in mind, since a lot more hardware-level coding is put onto them with DX12/Vulkan. Or AMD is going to have a lot of work patching game profiles.

    • @VictorMistral
      @VictorMistral 4 года назад +1

      @@908967 Just to note: HBCC and HBM are two different things...
      And the idea would be that the most important data lives in HBM, the less important but still needed data in GDDR6, and "hopefully" nothing in system RAM. If GDDR6 is fast enough by itself to feed a GPU, then HBM + GDDR6 would be faster than GDDR6 alone in most cases, if handled well (so if data is not in HBM, it gets fetched from GDDR6 and flagged as a candidate to move to HBM). A good memory design and a bit of flagging on the developers' side would make it work well. And since the high-speed SSD on the next-gen consoles can be used as "RAM" (textures are streamed straight from it), it wouldn't be surprising at all if developers could already flag a resource as more important so it gets preloaded. The groundwork has been done and tested; now the question is whether it's worth having a more complex memory controller, with a bit more latency to detect where the data lives, and potential issues with memory traversal, large textures, etc.
      And how do you market that safely? 4GB of HBM + 8GB of GDDR6, for example, would have the same capacity as a pure 12GB card, but probably not the same behaviour, because you want locality for predictable access times on the next virtual memory address.
      I would love to see HBM + GDDR6, just to "play around" and try to understand how the memory controller works... but I doubt it will happen, especially for gaming, since I fear the added complexity wouldn't make it worth it.
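
A toy sketch of the tiering idea described above (purely illustrative; nothing here reflects how AMD's memory controller actually works): a small fast pool in front of a larger slow pool, with misses served from the slow pool and the resource flagged for promotion.

```python
# Toy model of a two-tier VRAM pool: a small fast "HBM" tier in front of a
# larger "GDDR6" tier. Promotion-on-access is a guess at how such a scheme
# *might* behave, not a description of real hardware.
class TieredVram:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}                    # resource_id -> size (hot data)
        self.slow = {}                    # resource_id -> size (everything else)

    def load(self, resource_id, size):
        self.slow[resource_id] = size     # new data lands in the big pool first

    def access(self, resource_id):
        if resource_id in self.fast:
            return "fast tier"
        self._maybe_promote(resource_id)  # miss: serve from slow tier, flag as hot
        return "slow tier"

    def _maybe_promote(self, resource_id):
        size = self.slow[resource_id]
        if sum(self.fast.values()) + size <= self.fast_capacity:
            self.fast[resource_id] = self.slow.pop(resource_id)

vram = TieredVram(fast_capacity=4)
vram.load("shadow_map", size=2)
print(vram.access("shadow_map"))          # slow tier (then promoted)
print(vram.access("shadow_map"))          # fast tier
```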

  • @Relayer6a
    @Relayer6a 4 года назад +2

    A 50% performance improvement isn't the same going backwards. If you add 50% and want to get back to the original number you subtract 33%

  • @jasongooden917
    @jasongooden917 4 года назад +23

    Not an Apple fan...featuring Timmy Joe. Lol.. nice manly beard bro.

    • @revelation68kjv24
      @revelation68kjv24 4 года назад

      timmy joe makes videos about computers on the internet :) love his channel

  • @davidholomakoff4514
    @davidholomakoff4514 4 года назад +3

    In a podcast with Adored Tv and Tom at Moore's Law 5 months ago, Adored dropped some hints that the Big Navi architecture can utilize both memory types.

  • @SWANNwillSUFFICE
    @SWANNwillSUFFICE 4 года назад +8

    Camera is wobbly and the mic sounds like it's picking up more of your fan/coil whine than ever.

  • @elr77
    @elr77 4 года назад +2

    If true, it would make me EXTREMELY HAPPY, and I'm not sure I understand why people would be surprised. I'm also not sure why people believe that most people buying flagship consumer-grade GPUs actually care about power consumption, or that HBM is so drastically more expensive than GDDR6. People also seem to forget that Vega 10 and Vega 20 all came with a minimum of 8GB of HBM and were intended to retail at MSRPs from $500 to $1500, from the Vega 56 all the way up to the Vega Frontier Edition. The Radeon VII retailed for $800 and was a step up from the Frontier Edition with a few more features. HBM production is ramping up now compared to when Vega first came out.
    There is nothing remotely odd about a SKU or two of Navi 2X using HBM, nor will it be any more expensive than what we have seen in the past in AMD's consumer-grade flagships. AMD have also never really rolled a technology back after introducing it in a previous generation of GPUs. HBM came with the Fury GPUs and has persisted, going from 4GB of HBM in the Fury to 16GB now in the Radeon VII. The Radeon VII was AMD's flagship for 2019, despite what people say about the RX 5700 XT. In fact, Navi 10/RDNA 1 was such a failed architecture that it has never been marked as compatible with ROCm like all other AMD GPUs. Personally I want to believe that AMD are going to double down even more on HBM and increase the VRAM from 16GB to 32GB of HBM on the RX 6900 XT, with the RX 6900 getting 16GB. Fingers crossed!

  • @sugarsteps3164
    @sugarsteps3164 4 года назад +13

    That metallic drdrdrdrdr sound in the background (hope that made sense, it's not easy to describe) sounds like a bad fan or an old HDD working overtime. It gets a bit tiring to listen to after a while. Would be great if you could remove it somehow in your future videos.

    • @THEpicND
      @THEpicND 4 года назад

      Dislike the video so that he notices. That's what I have been doing

  • @rhekman
    @rhekman 4 года назад

    Paul, your conclusions about cache & memory latency from Vega to Navi are correct. However, I can see how you got confused looking for numbers to back that up. Both architectures had 16KB of first-level cache per CU and 4MB of L2 cache. Navi changed the per-CU L1 into an "L0" cache, then added a shared 128KB "L1" cache per dual compute unit as part of the multi-level cache redesign. They also doubled the bandwidth of the L0 cache to keep the instruction pipeline full for the beefier scalar units in Navi.

  • @pete2097
    @pete2097 4 года назад +9

    I still think you need to start from the 52 CUs of the Xbox Series X as a base, as that is (kinda) RDNA2, rather than from the 5700 XT.
    52 CUs @ 1875MHz is (guesstimate) 180 watts... +50% on this is 270 watts and 78 CUs...
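
Spelling out that back-of-the-envelope scaling (the 52 CU / ~180 W starting point is the commenter's guess, not an official figure):

```python
# Naive linear scaling from a guessed Series X-class baseline.
base_cus, base_watts = 52, 180   # commenter's guesstimate for an ~1.9 GHz part
scale = 1.5                      # +50% more CUs at similar clocks

print(base_cus * scale, "CUs")   # 78.0 CUs
print(base_watts * scale, "W")   # 270.0 W, assuming power scales with CU count
```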

  • @Goz2K
    @Goz2K 4 года назад

    Paul, I think you're a legend man! Love the way you just tell it like it is! My previous choice of buying a high-end G-Sync monitor may just come back to bite me in the arse. If Big Navi can please all audiences and can do 100Hz at 3440x1440, I'll be a happy man. Otherwise I'll just stick to my Nvidia cards.

  • @tobiassteindl2308
    @tobiassteindl2308 4 года назад

    advantages of HBM:
    -Less energy consumption
    -Less space needed (both board and die)
    -Higher total bandwidth possible
    -Higher signal integrity due to lower speed and shorter wires
    disadvantages of HBM: cost

  • @ash98981
    @ash98981 4 года назад +14

    AMD has always built GPUs that run efficiently at the low end of the power curve; then they'd push the frequency up to run comparable to Nvidia's mid-range. Looks like with RDNA2 and 7nm they don't have to blow out the power curve. Hmmmmm

    • @imadecoy.
      @imadecoy. 4 года назад +8

      Based on the rumors this time around it'll be Nvidia blowing out the power curve and trying to make a beefy enough cooler to keep it quiet.

    • @glenwaldrop8166
      @glenwaldrop8166 4 года назад

      Hopefully.
      AMD has a bad habit of late of pushing the lithography for all it's worth, in voltage and clock. Higher voltage improves yields, so I see why they do that, but it still makes the chips run hotter than they have any business doing. Back off a little bit; let overclocking be the reason the chips run at 75°C, not stock settings.

    • @ash98981
      @ash98981 4 года назад

      I've been undervolting AMD GPUs for a while and always got the feeling 7nm (smaller nodes)/the bottom of the power curve was what they've always been preparing for as the differentiator between Intel and Nvidia- I'm so far half right

    • @dra6o0n
      @dra6o0n 4 года назад +1

      @@ash98981 Nvidia cherry-picks their dies with binning, hence why you see a selective group of fans always defending them, saying their cards are the best or work best... Then you have the group that gets the worst bins in 3rd-party cards and says they're bad.
      AMD doesn't want to get that involved with binning, so they set the voltage to a level every chip can hit, so none of them break or under- or over-perform at stock.
      This is why Nvidia Founders Editions are always 'high end' performers; they are definitely binned, like Intel Extreme CPU parts.

    • @hoangd4132
      @hoangd4132 4 года назад +2

      yeah, remember when the RX 480 was a 150W card (actually a little bit higher), but they pushed it so far that the RX 590 ended up as a 225W card that's only about 20% faster

  • @neutechevo
    @neutechevo 4 года назад

    You are right. The first-level cache was raised to 2x16KB because the basic block is now a WGP (workgroup processor) consisting of 2 CUs; that's why you calculate with a 2x factor. So each cycle, each workgroup can now issue 2 waves of FP32 with Navi. The L2 also doubled, to 8MB. I think they needed this to keep all the CUs fed and not stalling while waiting for other groups to finish.
    The logical split is for the lower SKUs to have GDDR6 and the top one(s) (2?) to go for HBM.
    Perf/watt is stated as a single number, but it derives from multiple factors: the 7nm+ process (nV is stuck on 7 for now), uArch design advances that bring IPC, etc. So it's a trade-off across many fronts to hit the target.
    IPC 10%
    Process 10-15%
    And then you have scale, power and speed (GHz).
    So finally: a 225W Navi 10 @ 1800MHz with a 251mm² die on 7nm... can it now be a 450-500mm² die on 7nm+ at 200-230W @ 2000+MHz? Idk, maybe. It seems plausible.
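
A rough illustration of how such individual gains compound into a single perf/W figure (the 10% and 10-15% numbers are the commenter's guesses; the third factor is just whatever is left over to reach a +50% total):

```python
# Compounding independent efficiency gains multiplicatively.
ipc_gain     = 1.10    # ~10% IPC from architecture changes (guess from the comment)
process_gain = 1.125   # ~10-15% from the node, taking the midpoint (guess)
other_gain   = 1.20    # clocks/power tuning, assumed to fill the remaining gap

total = ipc_gain * process_gain * other_gain
print(f"Combined perf/W gain: ~{(total - 1) * 100:.0f}%")
# prints roughly 48-49%, close to the claimed +50%
```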

  • @GraphicallyChallenged
    @GraphicallyChallenged 4 года назад +4

    Congrats on reaching 20K!

  • @redrock425
    @redrock425 4 года назад +13

    Both AMD and Nvidia are playing a very good game of chicken!
    Edit: New camera? You look a bit orange 😉

    • @Immortal_Maori21
      @Immortal_Maori21 4 года назад

      Hahaha. I thought that too.

    • @erikburzinski8248
      @erikburzinski8248 4 года назад +6

      He just accidentally bought Trump-branded tanning spray.

    • @miguelpereira9859
      @miguelpereira9859 4 года назад

      It's cuz he's orange

    • @gaijinkuri684
      @gaijinkuri684 4 года назад

      Facial hair is expanding

    • @BNOVA
      @BNOVA 4 года назад

      I actually thought he said a new camera was supposed to be delivered so maybe.

  • @rudlzavedno7279
    @rudlzavedno7279 4 года назад +3

    A 502 mm² EUV 7nm die with 16GB of HBM2 makes a lot of sense. The problem is the price. This GPU would probably cost around $1,100 if the price of the 313 mm² die Radeon VII is any reference. Then again, why not; Ngreedia's top dog will cost the same. I guess the pricing is OK if AMD can beat it in raw performance.

    • @rudlzavedno7279
      @rudlzavedno7279 4 года назад +1

      @Ignatius David Partogi I don't think we know that info yet. N7+ has better yields (fewer defects), so it's cheaper to produce. I'd imagine AMD would prefer it over N7P

    • @haukionkannel
      @haukionkannel 4 года назад +1

      Hopefully it will be much, much more expensive than $1100!
      That would make Jensen do cartwheels! And seeing the leather jacket do cartwheels would be worth it ;)

    • @coolbeans6148
      @coolbeans6148 4 года назад +1

      The R7 was on 7nm when yields were ABYSMAL, so it's not a great reference.
      HBM2E isn't that much more expensive than GDDR6

  • @michaelschwader7944
    @michaelschwader7944 4 года назад +2

    If there are two versions, one using GDDR6 and the other HBM2, I'm going HBM2. The reason is the bandwidth.
    I have gone through the R9 Nano, Vega 64 and Radeon VII. The one thing I can say with certainty is that lag is virtually nonexistent. The VII may trade blows with the 2080, and more often than not the 2080 may beat it by about six percent when not overclocked, but there is a difference in the lag.
    Look at video comparisons. Don't just pay attention to the FPS but also to the lag. That HBM makes a difference.
    Also, I'm not sure, but do images tend to look better on AMD? It seems like Nvidia has some sort of weird image compression. Could be wrong, but I'm not sure.

    • @josealbertosantillan9732
      @josealbertosantillan9732 4 года назад

      You're talking about input lag?

    • @TheSilviu8x
      @TheSilviu8x 4 года назад

      You are right. I constantly switch between AMD and Nvidia, and out of the box the image quality of Nvidia cards is atrocious, especially with FreeSync monitors.
      If you tinker with that primitive GeForce driver interface, the image can be improved, actually really, really well, but then the speed drops...
      On my 34UC88-B I can't completely remove the screen tearing with the GTX 1070 Ti; it was nonexistent on any of the Radeon cards.
      That lag you are talking about I've noticed myself, but it goes away if you overclock the VRAM on the GeForce.

  • @BTAT2101
    @BTAT2101 4 года назад +3

    Congrats to 20k subscribers Paul! Greetings from Germany!

  • @fffforever
    @fffforever 4 года назад

    You can see how much calmer he is when the soda isn't there. His cognitive abilities are clearly impaired but his stress levels are greatly reduced.

  • @kennygeheim4230
    @kennygeheim4230 4 года назад +13

    best irish techtuber ever!

  • @pete2097
    @pete2097 4 года назад +3

    Here you go Paul, if you fancy a read on RDNA and everything before it:
    medium.com/high-tech-accessible/an-architectural-deep-dive-into-amds-terascale-gcn-rdna-gpu-architectures-c4a212d0eb9
    It's really interesting to see the path from VLIW to GCN to RDNA.
    Interesting that Navi has an L0 cache; lots and lots of optimisation went into keeping all the CUs working all of the time, as you said, with the wave size changed to 32 and the SIMD size to 32, so 2 waves can run per cycle rather than 1 wave every four cycles with Vega.
    But RDNA also has the ability to merge waves into 64 and run in backwards-compatible mode, for the PS4 of course.
    I think RDNA 2 might have further changes to the CUs, but it will need to keep the wave64 mode for the PS4.

  • @Nitroniko1983
    @Nitroniko1983 4 года назад +1

    I don't care which memory; only price/performance matters.

  • @ka96rol
    @ka96rol 4 года назад

    They said when they introduced Navi 10 that this architecture is compatible with both HBM and GDDR6, so yeah. HBM will be for the high end

  • @salehamini2036
    @salehamini2036 4 года назад +1

    Congrats for 20k subs.

  • @haroldasraz
    @haroldasraz 4 года назад +1

    In my region the difference between the RX 5700 XT and the RX 5700 (non-XT) is 10 EUR, whilst the 5700 XT price starts at 400 EUR. I think the 10 EUR is worth spending.

    • @chopcookies
      @chopcookies 4 года назад

      Nah dude better spend that 10 bucks on coffee to go🤣

  • @johnpaulbacon8320
    @johnpaulbacon8320 4 года назад

    So much we still don't know and won't know until the release.

  • @nexusiv1421
    @nexusiv1421 4 года назад

    AMD did try having a Titan-like card (not in performance); the $1499 liquid-cooled Vega Frontier Edition is an example.

  • @PS3RatBag98
    @PS3RatBag98 4 года назад

    AMD's last 'Titan' was the 295x2

  • @gabumoh
    @gabumoh 4 года назад

    Why is Obi-Wan Kenobi talking to me about computer hardware?...

  • @kleanthisgroutides7100
    @kleanthisgroutides7100 4 года назад

    If power is a concern then HBM doesn't fit your use case... even Apple has been stupid by integrating HBM into a laptop, regardless of the fact that HBM is at the opposite end of the spectrum when your top priorities are power, complexity and cost... I'm seriously dumbfounded at the engineering choices companies are making; it looks like marketing has taken over.
    GDDRxx hits all of the above, and you can integrate it onto an interposer; Samsung makes a variant that is exactly this.

  • @adrianoprimero3590
    @adrianoprimero3590 4 года назад +3

    Nobody can afford GPU prices anymore, we don't care.

    • @Carnyzzle
      @Carnyzzle 4 года назад

      I'm gonna have to end up buying a 4700G LOL

    • @Taintedmind
      @Taintedmind 4 года назад

      People that make good life decisions can just fine

    • @Carnyzzle
      @Carnyzzle 4 года назад

      @@Taintedmind
      That kind of logic is why companies love taking advantage of the general public LOL

    • @Taintedmind
      @Taintedmind 4 года назад

      @@Carnyzzle
      More like lack of competition on the top end, while colluding to artificially increase them together; companies have already been caught doing as such

    • @scotchwhisky6094
      @scotchwhisky6094 4 года назад

      Where are you working? My mate works in a warehouse and can still afford these gpu prices.

  • @satchfanuk
    @satchfanuk 4 года назад +5

    2048 Paul, not 248 :P

    • @PS3RatBag98
      @PS3RatBag98 4 года назад

      Ohhhhh

    • @MrArchy1986
      @MrArchy1986 4 года назад

      For a sec I thought he was having a stroke. Started to mumble some nonsense :D

  • @GTAguidesIta
    @GTAguidesIta 4 года назад

    I do not want to be tedious, but your math at 11:20 is wrong. Let's make a point:
    if the 5700 XT has a performance value of '100' at 225W, then +50% perf/watt means it delivers '150' performance at 225W. And if we assume a best-case scenario where power scales linearly with performance (which is wrong, because it doesn't scale linearly; more often you need disproportionately more power), we will at best get the same performance as a 5700 XT at 2/3 the power, which is 150W.
    So it's not as good as you said, 112W for the same performance... but it's still very good :)

    • @dubya85
      @dubya85 4 года назад +1

      It's not being tedious, it's being accurate. That is important.

  • @HAHA.GoodMeme
    @HAHA.GoodMeme 4 года назад +13

    120CU big Navi with 32GB HBM2 for $1500 I'd buy it

    • @rudlzavedno7279
      @rudlzavedno7279 4 года назад +3

      It's not gonna happen. Such GPU would draw 500-600W of power. Even prosumer GPUs target "only" 300-400W envelope.

    • @johnkeane320
      @johnkeane320 4 года назад

      Who wouldn't

    • @johnkeane320
      @johnkeane320 4 года назад +2

      That would be like $2500-2800

    • @haukionkannel
      @haukionkannel 4 года назад +2

      Hopefully it would cost more, much, much more!
      Jensen would be envious of the price point ;)

    •  4 года назад +1

      @JustVictor 17 we know nothing about CDNA, but I expect many more "shallow" CUs than in RDNA/2. It will focus on compute throughput, and in compute applications it makes sense to write the application for specific hardware, so it won't face the generalization problem of game engines.

  • @GegoXaren
    @GegoXaren 4 года назад

    "foon"
    That is just adorable.

  • @gregpeck8
    @gregpeck8 4 года назад

    I have an i7 9700F with an RTX 2070 and a 600W PSU, and I only draw up to around 500 watts. You're saying you think I can keep my 600W PSU with Big Navi? I'm for sure going with Big Navi but always thought I'd have to get a bigger PSU.

  • @tukiso_kev_malepe
    @tukiso_kev_malepe 4 года назад

    That 20k looks good..... Congrats again.

  • @garyslatter9854
    @garyslatter9854 4 года назад

    Its only price/performance that matters

  • @PreacherwithoutaPulpit
    @PreacherwithoutaPulpit 4 года назад +1

    Tin Foil Hat time:
    I wonder if Nvidia saw the writing on the wall way back when Polaris launched. They saw AMD's node progression and knew that Navi was in the pipe. So they followed up the GTX 10 series with ray-tracing-enabled cards that couldn't really do more than give a glimpse of what was to come. They did that with the 20 series of cards, and we all know that unless you had the absolute top-end cards, either the 2080 Ti or the Titan RTX itself, you really couldn't use ray tracing without taking a massive performance hit, making your games a stuttering mess. The two top-of-the-line cards could do it with a big performance hit, but they had enough raw grunt to get it done at playable frame rates of a consistent 60+ fps.
    They did this with the hope of shifting the narrative away from raw performance onto "features", because they knew Navi was going to take the outright performance crown eventually. Nvidia being Nvidia knew that with their market share they had the ability to sell these new features to the masses as something they couldn't possibly game without, even though at launch the performance was abysmal. They've since gotten it mostly ironed out and have found out exactly what it's going to take hardware-wise on the RTX side of things.
    AMD in the meantime will be launching what I believe will actually be the fastest graphics cards on the market, but will fall behind in ray tracing, and there's the rub. Even though AMD's going to be demonstrably faster at gaming, Nvidia's going to use the features narrative to bludgeon them over the head yet again...

    • @shehrozkhan9563
      @shehrozkhan9563 4 года назад

      I don't think Nvidia would have anticipated that early that Navi would be this good.
      I do agree with most of what you've said tho. I think AMD might win in standard rasterized performance but they'll lose in RT and DLSS alternatives.

    • @dubya85
      @dubya85 4 года назад +3

      The delusion is real. AMD has been beaten solidly for 5+ years now, and Nvidia is worried because of Big Navi? This isn't Ryzen versus Intel CPUs. AMD is going to get destroyed again.

  • @Koozwad
    @Koozwad 4 года назад +1

    If AMD thinks they will win, I can easily see them using HBM simply because they can. When they are selling the top GPU(s) they can automatically charge more, and that gives them room to add HBM, which will secure their sales even more. Now, I probably could have worded that better, but TL;DR: HBM in AMD GPUs = yes, if they are feeling confident. Top card = high price, with room for some improved memory.

  • @wakesake
    @wakesake 4 года назад +1

    remember Radeon Pro SSG..

  • @dra6o0n
    @dra6o0n 4 года назад

    Well, it will be cheaper, so it can be expensive then?
    Without HBM and GDDR6 it's viable to make Big Navi fast but cheap at a ridiculous price if you focus on performance in other form factors, but add those in and you ramp prices up to Nvidia levels, just to show off?

  • @dra6o0n
    @dra6o0n 4 года назад

    We still don't see MCM being practical yet, for a couple of reasons, one being that they are trying to sell us monolithic dies at decent pricing right now, so there isn't a huge urge to push us all onto MCM products. If they can scale to two dies, then they can scale to 4, 8, 16, etc.
    That would mean their products would EASILY and instantly be rendered obsolete in less than one generation if they pushed MCM too fast.
    Say efficiency per added die is the same or better: selling a GPU with 2 dies at less than 2x the cost of a monolithic GPU would be a huge loss of profit from their investors' point of view.
    Why sell two dies in one GPU when you can sell two GPUs? Basically.
    If you want to see MCM rushed out and polished, you need a huge amount of competition forcing upgrades to be cheap, or a huge amount of software that 'pushes' people to upgrade to stay 'compatible'. Either way, it's pro-consumer or pro-corporate.

  • @FalseInformation
    @FalseInformation 4 года назад

    HBM... so it's gonna make it a thousand times the cost then, just like the Vega cards...

  • @smilingguy6480
    @smilingguy6480 4 года назад

    Whenever I watch these videos I crave Lucky charms they’re magically delicious

  • @dainluke
    @dainluke 4 года назад

    40 CU/256 bit bus/8GB GDDR6 ($300)
    64 CU/384 bit bus/12GB GDDR6 ($450)
    72 CU/512 bit bus/16GB GDDR6 ($650)
    80 CU/2048 bit bus/16GB HBM2E ($900)
    That's my prediction.

    • @shehrozkhan9563
      @shehrozkhan9563 4 года назад

      Your prices are WAYYYYY TOO low. No way amd will be giving you a 40 cu card with 12 gb ram for 300. Not happening anytime soon

    • @dainluke
      @dainluke 4 года назад

      @@shehrozkhan9563 AMD are, though... They're refreshing the 5700 XT as the mid-range card for RDNA 2 at a reduced price. Paul has mentioned this numerous times as well. It also makes complete sense because it should have always been $300 XD. It launched at $450 because it was beating the 2070 before SUPER launched. It got dropped to $400 to win over consumers. You're telling me they wouldn't drop the 5700 XT to $300 under the RDNA 2 refresh to compete with the 3060? I will say, I meant to list it with 8GB of VRAM 😂.

    • @bobpro583
      @bobpro583 4 года назад

      ​@@shehrozkhan9563 No the prices are just right dude lol what r u smoking.

    • @shehrozkhan9563
      @shehrozkhan9563 4 года назад +1

      @@dainluke Okay now it makes more sense. A 5700 xt refresh with 8 gb ram for 300, I can see that. Although I think their highest end navi will be like $1000.
      Also yeah I do agree, the 5700 series SHOULD HAVE been Polaris replacements, not Vegas.

    • @shehrozkhan9563
      @shehrozkhan9563 4 года назад

      @@bobpro583 elaborate pls

  • @cracklingice
    @cracklingice 4 года назад

    295x2 is arguably their first titan

  • @BrunoAlmeidaAkaRonin
    @BrunoAlmeidaAkaRonin 4 года назад

    The "killer" will have HBM2E as ammunition.

  • @DrGrosDoigt
    @DrGrosDoigt 4 года назад

    Poor PSU, glowing red and sh!7.

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    hate the mumbo jumbo, better times, cheers!

  • @johnkeane320
    @johnkeane320 4 года назад +3

    If only AMD cards worked with my g sync monitor 😭😭😭

    • @d.oconnor4047
      @d.oconnor4047 4 года назад

      My thinking exactly 👍

    • @johnkeane320
      @johnkeane320 4 года назад

      I have the Predator X27P and I can't get a solid answer on whether it supports FreeSync. I know it has VRR over HDMI.

    • @johnkeane320
      @johnkeane320 4 года назад

      @@Android-ng1wn yeah, the LG OLEDs support both, as they are FreeSync supported and G-Sync compatible, but I don't think my monitor supports FreeSync, only G-Sync. I'd swap to AMD in a heartbeat if it supported FreeSync as well

    • @samuelkdu
      @samuelkdu 4 года назад

      if you go over your monitor's refresh rate, G-Sync and whatever AMD's equivalent is become useless; I just use Fast Sync

    • @johnkeane320
      @johnkeane320 4 года назад

      @@samuelkdu ye thanks, I'm aware of that. I use g sync on, v sync on in control panel, and RTSS to cap frame rate to a rate within the g sync range. So for me g sync (or free sync) is essential.

  • @pete2097
    @pete2097 4 года назад +5

    Am betting HBM version is for Apple....

    • @No-One.321
      @No-One.321 4 года назад

      That card has already been detailed though, which shows that RDNA can use both types of memory

    • @pete2097
      @pete2097 4 года назад

      @@No-One.321 You're talking about the Apple Radeon Pro 5600M? I'm saying that there's a 6600M with HBM for Apple. But that doesn't indicate that PC will get HBM...

    • @No-One.321
      @No-One.321 4 года назад

      @@pete2097 What do you think Apple is? So RDNA using both GDDR6 and HBM doesn't mean that PC can't get HBM? Only someone who is an Nvidia fanboy would come to that conclusion with that logic, because a normal, unbiased person would think: OK, so the 5000 series uses GDDR6 and Apple is getting an RDNA part with HBM, so it's very likely that at least a higher-end GPU would get HBM memory.

    • @pete2097
      @pete2097 4 года назад +2

      @@No-One.321 Ha, no, I just think that RDNA2 might not need HBM, and I've got a 5700 XT, so save the fanboy comments. That just makes you look silly.
      Nvidia doesn't need HBM, so why should AMD go with it? It's expensive for what it is.
      If you recall, there are a few lite versions of RDNA2, and I would say one will be Apple's, with HBM...

    • @No-One.321
      @No-One.321 4 года назад

      @@pete2097 the new apple is rdna not rdna2

  • @rcrhino2148
    @rcrhino2148 4 года назад

    Reading with PAUL!

  • @DeadphishyEP3
    @DeadphishyEP3 4 года назад +1

    I am sure it's been said, but your 50% math is incorrect. You have to start with the lower number, not the final number. So if Navi 10 used 225 watts, Navi 2x at the same performance will use 150 watts: 150W x 150% is 225W.

  • @rajurowe189
    @rajurowe189 4 года назад +1

    Dont get your hopes up too high, it could be all smoke and mirrors and we are more likely to call it Little Navi hahahah GO NVIDIA

    • @dubya85
      @dubya85 4 года назад +1

      Mini Navi!

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    I think Jensen would make more from the A100 if he gave it to us for 1.5-2 grand

    • @imadecoy.
      @imadecoy. 4 года назад +1

      Doubt it. The yields are probably disgustingly bad on a die that big and datacenters will probably buy more than gamers would.

  • @candoslayer
    @candoslayer 4 года назад

    nvidia worked with microsoft to get ray tracing in direct x so that is the version that we will see in amd cards

    • @kriszhao80
      @kriszhao80 4 года назад

      Dxr is microsofts version I believe?

    • @candoslayer
      @candoslayer 4 года назад +1

      @@kriszhao80 Yeah, Nvidia worked with them to develop it. If Nvidia hadn't gotten it into DirectX, it would not be widely adopted.

    • @SerBallister
      @SerBallister 4 года назад

      @@candoslayer RT is working in vulkan now too

  • @jonothonlaycock5456
    @jonothonlaycock5456 4 года назад

    Paul been on the Beers? ;-)

  • @shocka007
    @shocka007 4 года назад

    Bandwidth is the weakest link, so the closer and the wider the memory the better, and if it's directly accessible by GPUs and CPUs in parallel... fuck.

  • @johnluuee
    @johnluuee 4 года назад

    69! 69! 69!
    The 69 series with HBM2e, I bet it sells at a thousand dollars

  • @petar5151
    @petar5151 4 года назад +1

    When will you give up Paul? It doesn't matter anymore.

    • @imadecoy.
      @imadecoy. 4 года назад +1

      Profile pic checks out?

    • @kumbandit
      @kumbandit 4 года назад

      You're obviously unbiased 😂

  • @NatrajChaturvedi
    @NatrajChaturvedi 4 года назад +5

    Lol, you've done this same video how many times now?

    • @dubya85
      @dubya85 4 года назад

      If they keep saying it enough, it might become true!

  • @kusumayogi7956
    @kusumayogi7956 4 года назад

    Since Navi 10 can use HBM2 and AMD calls that part Navi 12,
    I believe AMD can do whatever they want to do.

  • @TrueThanny
    @TrueThanny 4 года назад

    You'd need to run at 2.44GHz to get to 20 TFLOPS with just 64 CUs. That's not realistic. There probably will be a 64 CU card, but it won't be the biggest, and it won't be running at >2.4GHz.
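
For reference, the FP32 throughput formula behind that 2.44 GHz figure (64 shaders per CU and 2 FLOPs per shader per clock is the usual GCN/RDNA assumption):

```python
# FP32 TFLOPS = CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock (GHz) / 1000
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

target_tflops, cus = 20.0, 64
clock_needed = target_tflops * 1000 / (cus * 64 * 2)
print(f"{cus} CUs need {clock_needed:.2f} GHz for {target_tflops} TFLOPS")  # ~2.44 GHz
print(f"80 CUs at 2.0 GHz: {tflops(80, 2.0):.1f} TFLOPS")                   # 20.5 TFLOPS
```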

  • @MrAdhiSuryana
    @MrAdhiSuryana 4 года назад

    I know someone need to talk about this 🤣

  • @Lastguytom
    @Lastguytom 4 года назад

    AMD is keeping a tight lid on the info about Big Navi!!

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    5-6 i dont need hand outs

  • @denvera1g1
    @denvera1g1 4 года назад

    My 5700 XT at 97W TBP runs at around 1200-1300MHz. In Furmark (960x540) this is the difference between 240 FPS and 185 FPS, but it also basically puts my fans at idle (768 RPM) and keeps temps below 60°C with the junction temp under 70°C. Now obviously this is using half the power, but still ~70% of the original performance.
    900p Furmark sees 130 FPS at 200W, and only drops to 95 FPS at 101W. You can double the power draw for 30% more performance, or should you double the CU count instead? AMD bet on the smaller die because that didn't cost them much, and at the higher power draw it was competing with the 2080.
    I could see an 80 CU 5900 XT pulling only 220W at 1400MHz; it would likely have been more powerful than a 2080 Ti, though not in every game, because frequency still matters in some games.
    Like I have been saying since the beginning, the 5700 XT was meant to compete with the 1660 (mostly because of die size), but then they realised it overclocked well, plopped on more RAM and better cooling, and for a time it competed with and sometimes beat the 2080. AMD even said they weren't going to produce a high-end card; they surely had test samples, but figured the 5700 XT would be enough, because it was beating Nvidia's second best at a cost to them closer to Nvidia's middle-of-the-road cards, even with the relatively high-end RAM, VRM and cooling.
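
Putting the commenter's own 900p Furmark numbers into perf/W terms (these are their reported figures, not independently verified):

```python
# Perf/W from the two reported 900p Furmark data points: (watts, fps).
points = [(200, 130), (101, 95)]
for watts, fps in points:
    print(f"{watts:>3} W -> {fps} fps -> {fps / watts:.2f} fps/W")
# 200 W gives ~0.65 fps/W, 101 W gives ~0.94 fps/W: roughly +45% perf/W
# just from sitting lower on the voltage/frequency curve.
```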

  • @jasongooden917
    @jasongooden917 4 года назад

    If it has HBM.. my gosh that would be crazy

    • @AnotherAnonymousFag
      @AnotherAnonymousFag 4 года назад +2

      Yes, because Radeon have totally never done that before...

    • @desjardinsfamily5769
      @desjardinsfamily5769 4 года назад

      I have a Vega 56 and a Radeon VII in my tower right now... I NEEEED more HBM. I'd buy it in a second....

    • @AnotherAnonymousFag
      @AnotherAnonymousFag 4 года назад

      @@desjardinsfamily5769 Dont forget to buy the liquid nitrogen ;)

    • @desjardinsfamily5769
      @desjardinsfamily5769 4 года назад

      @@AnotherAnonymousFag I threw the side panel in the trash... 😂

  • @rodturner6759
    @rodturner6759 4 года назад

    Hey my Man, could you please reposition your Mic, or something...PLEEEEEEEEZZZZZZ! :)

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    thats the problem, give it to me

  • @JesusGarcia-Digem
    @JesusGarcia-Digem 4 года назад +1

    Ive been saying this for a while, HBM2E!!!!

    • @malathomas6141
      @malathomas6141 4 года назад +1

      What is it, like a more efficient HBM2?

    • @JesusGarcia-Digem
      @JesusGarcia-Digem 4 года назад +1

      @@malathomas6141 www.tomshardware.com/reviews/glossary-hbm-hbm2-high-bandwidth-memory-definition,5889.html What Are HBM, HBM2 and HBM2E? A Basic Definition

  • @foxfff123123
    @foxfff123123 4 года назад

    Lisa Su edition🤣🤣🤣🤣. Fanboys line up!

  • @mrmedium7984
    @mrmedium7984 4 года назад

    The hyper-class card should use HBM, while the extreme-enthusiast tier and lower should be GDDR6

  • @TwoSevenX
    @TwoSevenX 4 года назад

    The chance that there isn't a halo Navi is pretty slim. 64-80 CUs, a 1024x2 bus, 16 gigs of HBM2E. Really the only question is: does it come out now to bury Nvidia, or come out when Nvidia announces 7nm Super Amperes?

    • @dubya85
      @dubya85 4 года назад

      Yes because Fury, Vega's, Radeon 7 and 5700XT all buried nvidia. Stands to reason that mini(big) navi will too...

    • @TwoSevenX
      @TwoSevenX 4 года назад

      Cool story bro but it's pretty damn clear nvidia sat on their ass this gen and your delicious tears won't change that all that much. I've bought team green for the last three gens, but the chances of that happening this time are p much zero.

    • @dubya85
      @dubya85 4 года назад

      @@TwoSevenX lol what

  • @adrenlynl7339
    @adrenlynl7339 4 года назад

    A 600 watt PSU on a 350W TDP card? With the new supposed 12-pin? No thanks.

  • @paulbestwick2426
    @paulbestwick2426 4 года назад

    AMD Zeus overthrower of Titans?

  • @Scisca1a2a
    @Scisca1a2a 4 года назад

    248 bit? Not sure how that'll work :P

  • @seversheen3044
    @seversheen3044 4 года назад

    The AMD 6900 XT will be a monster!! AMD should go all out with the flagship card: 80 CUs, 16GB of RAM, a 512-bit bus and 896 GB/s of memory bandwidth.
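
That 896 figure follows from the standard memory bandwidth formula, assuming 14 Gbps GDDR6 (the data rate is my assumption; the comment only gives the bus width and the result):

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
bus_bits = 512
data_rate_gbps = 14                     # typical GDDR6 speed, assumed here
bandwidth_gbs = bus_bits / 8 * data_rate_gbps
print(bandwidth_gbs, "GB/s")            # 896.0 GB/s
```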

    • @haukionkannel
      @haukionkannel 4 года назад

      Sever Sheen $2449 to $3999 for the HBM memory version, that would be fun!
      ;)
      Nvidia and Jensen would be green with envy, because AMD would have the balls to be more expensive than Nvidia!

    • @seversheen3044
      @seversheen3044 4 года назад

      haukionkannel AMD doesn’t charge that much. That Nvidia’s thing.

    • @NatrajChaturvedi
      @NatrajChaturvedi 4 года назад +2

      People begged them the same way to go all out with RDNA 1, but they didn't. They just put out 40 CU cards that performed roughly 5% better than the 2060 and 2070 respectively and called it a day.

    • @seversheen3044
      @seversheen3044 4 года назад

      Age_of_fire Well if AMD doesn’t bring a 16gb monster this November I’m sticking with my 5700 XT. I’ll be enjoying 1440p gameplay on Halo Infinite.

    • @seversheen3044
      @seversheen3044 4 года назад

      Age_of_fire I couldn’t agree with u more. U see all Nvidia gamers playing games like Fallout 4 and Witcher 3 on sli 2080 ‘Tis 8k maxed out graphics. Meanwhile the 5700 XT barely manages to hit 4K on more demanding titles.

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    never made a titan, not out to STEAL

  • @sleepagedog4242
    @sleepagedog4242 4 года назад

    Amd got this!

    • @dubya85
      @dubya85 4 года назад

      Sorry Pepe, nvidia is coming to slay the Amd normies again

  • @ShaneMcGrath.
    @ShaneMcGrath. 4 года назад

    Don't care how AMD does it whether with HBM or GDDR6/6X, Just give me something that is 50% faster than my 2080Ti and I'll buy it providing Nvidia doesn't beat it!

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    hmmm

  • @Koozwad
    @Koozwad 4 года назад

    MathS, Paul.
    MathematicS=MathS
    Math=Mathematic

    • @kumbandit
      @kumbandit 4 года назад

      Math is not a quantifiable noun...

    • @Koozwad
      @Koozwad 4 года назад

      @@kumbandit 'Math' is a shortened form of 'mathematic'. 'Maths' is a shortened form of 'mathematics'. You don't 'do the math(ematic)' - you 'do the maths'. You don't take 'math' class at school. This would imply the class itself is mathematic(al) in nature. The class is not mathematic(al). The subject that is being taught inside the classroom is mathematics.

  • @Krtkodlaq
    @Krtkodlaq 4 года назад +1

    The X Window System is the Linux graphics frontend

  • @NAWAF-vc1px
    @NAWAF-vc1px 4 года назад

    the audio level on this video is shite as you would say.

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    i want f acasi

  • @evilsatorii
    @evilsatorii 4 года назад +1

    Corona detected

    • @ryanhills2088
      @ryanhills2088 4 года назад +1

      intelamdnvidia fanboy detected

    • @evilsatorii
      @evilsatorii 4 года назад +3

      @Ryan Hills I've had only AMD CPUs since November 2019 - a 3600X and a 3950X in my main PC - and I also only have AMD GPUs right now - a 5700 XT in both PCs. I had a 2080 Ti but I sold that card like a month ago because I know it will lose value soon

    • @NatrajChaturvedi
      @NatrajChaturvedi 4 года назад +3

      @@ryanhills2088 'intel amd nvidia' fanboy? You mean a computer fanboy?

    • @ryanhills2088
      @ryanhills2088 4 года назад

      @@NatrajChaturvedi just having a crack in it :)

  • @danbeaman1115
    @danbeaman1115 4 года назад

    That fan noise is getting worse... First video where it's been off-putting rather than just background noise

  • @niteowl9733
    @niteowl9733 4 года назад +1

    Bro, 128 x 2 is 256... I've never heard of a 248-bit bus. ???? 348? Yeah...
    I wish the youtubers would stop with the speculation videos. Totally played out. Get new material, please.

  • @Ironclad17
    @Ironclad17 4 года назад +1

    All this talk about memory got me wondering why we have no standardized benchmarks for GPU memory bandwidth. I understand most people testing cards internally can't really open them up and look at what's on the PCB, but surely they can run a simple benchmark besides the popular ones with arbitrary scales that tell us nothing. A simple Google search brought up a couple:
    github.com/ekondis/gpumembench
    github.com/UoB-HPC/BabelStream
    This the new camera?
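
For anyone curious what tools like those measure: STREAM/BabelStream-style benchmarks time simple array operations and divide the bytes moved by the elapsed time. Here is the same arithmetic applied to host memory with numpy, just to show the principle (it says nothing about a GPU's VRAM bandwidth):

```python
import time
import numpy as np

# STREAM-style "copy": bandwidth = bytes moved / elapsed time.
n = 50_000_000                     # 50M doubles = 400 MB per array
a = np.random.rand(n)
b = np.empty_like(a)

start = time.perf_counter()
np.copyto(b, a)                    # one read of a, one write of b
elapsed = time.perf_counter() - start

bytes_moved = 2 * a.nbytes
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s host-memory copy bandwidth")
```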

  • @user-tb8jj8nn7t
    @user-tb8jj8nn7t 4 года назад

    80+ cu in and around 2ghz...smack jenson

  • @TrueThanny
    @TrueThanny 4 года назад

    00:58 That's 2048-bit, not 248-bit.

  • @dellecar5824
    @dellecar5824 4 года назад +3

    buzzing sound back again and very loud and distracting.

    • @faceplants2
      @faceplants2 4 года назад +1

      Using headphones? Sounds okay on my phone

    • @dellecar5824
      @dellecar5824 4 года назад +1

      @@faceplants2 yeah using headphones. The last few videos have almost been unbearable to watch. I love the videos but the constant buzzing makes it tough to sit through 20 minutes straight of that regardless of how good the video is lol..

    • @richardscarlett7942
      @richardscarlett7942 4 года назад +1

      @@dellecar5824 then dont listen oh headphones. Just another example of minority demanding that the majority bow to their whims

    • @johnkeane320
      @johnkeane320 4 года назад

      Sold my 2060 super today used for €435 using geforce now for the next month or so until 3000 series or rdna2

    • @NatrajChaturvedi
      @NatrajChaturvedi 4 года назад +1

      It's there for sure but nowhere near as distracting to me.

  • @tedtrager9560
    @tedtrager9560 4 года назад

    Fix the annoying noise. It's very distracting.

  • @TheSilviu8x
    @TheSilviu8x 4 года назад

    20000