Neon Noir Benchmark: Ray Tracing With The Oldies

  • Published: 6 Sep 2024
  • Check out the FSP CMT 520 Plus Case here:
    amzn.to/2MMTDQi
    Cryengine Neon Noir Benchmark:
    www.cryengine....
    Digital Foundry Neon Noir video:
    • Neon Noir: Crytek's So...
    Want to come hang out with us and talk new and retro tech?
    Join the Pixel Talk Discord server! / discord
    Thanks for taking the time to watch this video. If you like this type of content, then please consider subscribing to my channel.
    Please help support the channel by using and bookmarking my affiliate links below. Every little helps, as it enables me to buy new and used hardware to test on the channel.
    **Amazon Affiliate Links**
    (United States): amzn.to/2CvxXRd
    (United Kingdom): amzn.to/2m1rBCT
    **Sign up for Humble Bundle Monthly! $12 a month for amazing games!** www.humblebund....
    Follow me on Twitter: / f2ftech1
    Contact me on Twitter or here in the comments if you have any questions about the video, testing methodology, or the products used.
    Production Music courtesy of Epidemic Sound: www.epidemicsou...

Comments • 131

  • @F2FTech
    @F2FTech  4 years ago +7

    Made a mistake on the drivers used - NVIDIA Cards: 391.35 & 441.12 - AMD Cards: Catalyst 15.7 & Adrenalin 19.11.2 - Intel: 26.20.100.7262

  • @Ivan-pr7ku
    @Ivan-pr7ku 4 years ago +62

    TeraScale architectures are essentially devoid of hardware scheduling for instruction dispatch. As with any pure VLIW design, it all falls to the JIT compiler to optimize the code before execution, and even then it can't do anything to avoid run-time hazards (and ray tracing has plenty of them).
    On top of that, those old AMD GPUs don't even have a proper read/write L2 cache (only a small color/depth cache in the ROPs), so global communication between compute kernels goes through slow video memory -- something that definitely bottlenecks RT.
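    As a rough, hypothetical C++ sketch (not CRYENGINE's actual kernels) of the kind of data-dependent loop a ray-traced reflection pass runs per pixel -- every branch, trip count and memory access below is decided at run time, which is exactly what a static VLIW bundler and a tiny ROP-side cache cope with poorly:

    ```cpp
    // Hypothetical illustration only -- not CRYENGINE code.
    // Which child a ray visits, how deep the stack gets, and which node is
    // fetched next all depend on run-time data, so a VLIW JIT compiler cannot
    // pre-pack these operations into full instruction bundles, and the scattered
    // node reads go straight to video memory when there is no real L2 cache.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    struct BvhNode {
        float   bmin[3], bmax[3];  // axis-aligned bounding box
        int32_t left, right;       // child indices, -1 for a leaf
    };

    // Slab test: does the ray (origin o, inverse direction invD) hit this box?
    static bool hitBox(const BvhNode& n, const float o[3], const float invD[3]) {
        float tmin = 0.0f, tmax = 1e30f;
        for (int a = 0; a < 3; ++a) {
            float t0 = (n.bmin[a] - o[a]) * invD[a];
            float t1 = (n.bmax[a] - o[a]) * invD[a];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
        }
        return tmin <= tmax;
    }

    // Count leaf hits along one ray; the branch pattern differs for every ray.
    static int traverse(const BvhNode* nodes, const float o[3], const float invD[3]) {
        int stack[64];
        int sp = 0, hits = 0;
        stack[sp++] = 0;                           // start at the root
        while (sp > 0) {                           // trip count is scene-dependent
            const BvhNode& n = nodes[stack[--sp]]; // scattered, cache-unfriendly read
            if (!hitBox(n, o, invD)) continue;     // divergent branch
            if (n.left < 0) { ++hits; continue; }  // leaf reached
            stack[sp++] = n.left;                  // push order decided at run time
            stack[sp++] = n.right;
        }
        return hits;
    }

    int main() {
        // Tiny 3-node tree: a root box split into two leaf boxes along Z.
        BvhNode nodes[3] = {
            {{-1, -1, 0}, {1, 1, 10},  1,  2},
            {{-1, -1, 0}, {1, 1,  5}, -1, -1},
            {{-1, -1, 5}, {1, 1, 10}, -1, -1},
        };
        const float origin[3] = {0.0f, 0.0f, -1.0f};
        const float invDir[3] = {1e30f, 1e30f, 1.0f};  // ray pointing down +Z
        std::printf("leaf hits: %d\n", traverse(nodes, origin, invDir));  // prints 2
        return 0;
    }
    ```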

  • @sophustranquillitastv4468
    @sophustranquillitastv4468 4 years ago +26

    TeraScale cards are pretty much obsolete in 2019. They can play almost nothing now.

  • @Erksah68
    @Erksah68 4 years ago +13

    GTX480 is more efficient at generating heat than my heater.

  • @haramaschabrasir8662
    @haramaschabrasir8662 4 years ago +7

    Starting from 6:37 you can see there's something odd with the 780's textures. Look at the crumpled newspaper on the road or the light on the pole, for instance. The benchmark seems to swap textures automatically, so you can't just compare the fps between cards; you have to judge by how the game looks.

    • @F2FTech
      @F2FTech  4 years ago +2

      Yeah that’s a valid point. Not having enough VRAM causes this swap issue. The UHD 630 has the hardest time.

  • @DanielCardei
    @DanielCardei 4 years ago +11

    Omg, fantastic video!
    3:37 I cried a little when you showed the 580 csf

  • @OTechnology
    @OTechnology 4 years ago +17

    Dang, the UHD 630 is toying with the old big-boy GPUs...

    • @foch3
      @foch3 4 years ago

      Its IQ isn't even close though.

    • @300maze
      @300maze 4 years ago

      In a test that crushes the TeraScale VLIW design, sure (and these GPUs haven't gotten any driver updates in years).

    • @GewelReal
      @GewelReal 4 years ago

      While using less than 10% of the power as well!

  • @mechanicalpants
    @mechanicalpants 4 years ago +17

    Cool, thanks for these benchmarks. I wonder if we will see a software solution become the future of ray tracing in games? After all the hype around Nvidia's newest RTX GPUs, that would certainly be an interesting development.

    • @KARAOTI23
      @KARAOTI23 4 years ago

      AMD, though, went and implemented hardware-accelerated RT in their upcoming GPU lineup, so I wouldn't be too optimistic about that.

  • @0Wayland
    @0Wayland 4 years ago +8

    Very cool idea, really enjoyed these GPU battles!

  • @LegalEliminator
    @LegalEliminator 4 years ago +63

    I think it's absolutely incredible how this benchmark has revealed just how much bullshit Nvidia has been spewing about ray tracing. My 980 Ti easily gets 60 fps at 1080p ultra in the benchmark and well over 80 on high settings. I get that this is basically just reflections, but it makes me optimistic about the future of older GPUs and new graphics technology.

    • @murphy7801
      @murphy7801 4 years ago +13

      Ah, actually, no it isn't. Nvidia's advertised feature is hardware-based ray tracing, which is much more effective. We've been able to do ray tracing in software since the 90s; hell, Borderlands 2's laser sights were ray traced, and that's no modern game.
      For example, at 1080p an RTX 2060 can run this benchmark and score nearly 10k. That's a large leap, but it has dedicated hardware.
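      For context, here is a toy C++ sketch (purely illustrative, not taken from any game) of the kind of test "ray tracing in software" boils down to -- ordinary floating-point math that any CPU or shader core can run; what RT cores add is fixed-function hardware for doing billions of such tests, plus the BVH traversal around them:

      ```cpp
      // Toy example of software ray tracing: a single ray-sphere intersection.
      #include <cmath>
      #include <cstdio>

      struct Vec3 { float x, y, z; };

      static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
      static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

      // Returns the distance along the ray to the first hit, or -1 if it misses.
      float raySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
          Vec3  oc   = sub(origin, center);
          float b    = dot(oc, dir);                 // dir assumed normalized
          float c    = dot(oc, oc) - radius * radius;
          float disc = b * b - c;
          if (disc < 0.0f) return -1.0f;             // ray misses the sphere
          float t = -b - std::sqrt(disc);
          return (t > 0.0f) ? t : -1.0f;
      }

      int main() {
          // Camera at the origin looking down +Z at a unit sphere 5 units away.
          float t = raySphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f);
          std::printf("hit distance: %.2f\n", t);    // prints 4.00
          return 0;
      }
      ```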

    • @LegalEliminator
      @LegalEliminator 4 years ago +3

      @@murphy7801 gr8 b8 m8 I r8 8/8

    • @murphy7801
      @murphy7801 4 years ago +4

      @@LegalEliminator maybe get a degree in computer science and come back to me

    • @LegalEliminator
      @LegalEliminator 4 years ago +2

      @@murphy7801 Well, if you are being serious, let me put it this way. Like you said, ray tracing has been used for computer animation basically since the beginning. I agree it's not a new technology, but it's pretty much exclusively been used in production work for CG in movies, TV and other media. Sure, parts of ray tracing have been used in real-time rendering like games, such as light bounces off surfaces, but nowhere near to the same extent. Only recently has full ray tracing even been possible on consumer-grade gaming hardware. Looking at the development of the software that drives these technologies, it's pretty clear that in fact it wasn't the hardware that was holding us back (at least to some extent), it was actually the software.
      My point is that Nvidia has been doubling and tripling down on the claim that currently only RTX cards are capable of running real-time ray tracing in a way that can be used in games. They even pulled that stunt where RTX was enabled on 10-series cards. Of course the RTX-optimised stuff is going to run like shit on hardware that doesn't have RT cores; it's like trying to edit 8K footage on a Pentium. Sure, it's technically possible, but it's going to run like shit because it wasn't designed for that kind of workload. However, since the introduction of DXR by Microsoft and now Crytek's ray tracing software, it's become very apparent that there's no real need for dedicated hardware to implement these new technologies. The video clearly showed this, where decade-old hardware was able to run the demo at 30+ fps in some instances. Even before the demo was released to the public, ray tracing was enabled for the RX 5700 series through mods in Minecraft, where it runs extremely well compared to the Nvidia counterparts.
      As it currently stands, RTX is a gimmick at best that takes away 40% of the card's potential fps. The final nail in the coffin will most likely be the new consoles, as both of them are said to have ray-tracing capabilities being developed in conjunction with AMD. Developers will put most of their development time into those versions because they outsell all PC hardware many times over. Sure, Nvidia is going to stick to their guns with RTX, and their next-gen implementation may even be incredible, but hardware-agnostic RT like the stuff used in the Crytek demo will just ensure that RTX is only necessary if you're in the 1% of gamers who want it for the gloating factor.

    • @murphy7801
      @murphy7801 4 years ago +4

      @@LegalEliminator OK, but are you aware that talk of the final nail in the coffin is really about industry acceptance, like when shaders became a thing? So your argument that the consoles and AMD are implementing ray tracing as well is really saying that Nvidia got the entire industry to move to its beat... impressive. Now, in regards to nerfing performance: no, it's not like the GPU architecture would have been able to pack in many more CUDA cores instead. It doesn't work exactly like that; you'd get about a 10% lift at best for how much die space is used.
      So again, Nvidia released a hardware feature ahead of its time that is now becoming mainstream, and their cards will have a longer lifespan because of it, like AMD did when they introduced more DX12 features in the GCN architecture...

  • @FullyBuffered
    @FullyBuffered 4 years ago +4

    Nice work Mike! I've quite enjoyed putting some cards through their paces on this benchmark, but it's great to see them all stacked together here. As usual, the mighty 3GB Fermi is still doing great compared to TeraScale, but I'm also surprised by how the 780 did compared to the 390. The 7790 is also quite an interesting case, considering its low 1GB of VRAM and how it was able to tie with the 580.

  • @bluemarble4051
    @bluemarble4051 4 years ago +3

    Amazing comparison, love those Classified & Matrix cards.

  • @nutzeeer
    @nutzeeer 4 years ago +4

    The RAM usage is higher on the cards with less VRAM.

  • @lagginswag
    @lagginswag 4 years ago +3

    The 3 power plugs on the 580 Classified though 🥴

    • @CompatibilityMadness
      @CompatibilityMadness 4 years ago +1

      When Fermi goes LN2... you need all the 8-pins you can fit ;)

    • @jakegarrett8109
      @jakegarrett8109 4 years ago +1

      Yes, this is an LN2-focused card designed by K1NGP1N (one of his first developed with Tim); it's meant to be unleashed with no limits. Even with modern Nvidia gimping power limits to stupidly low values, all the K1NGP1N edition cards manage to "leak" unlocked BIOS versions, cough, by accident of course, since that's against the contract between Nvidia and its distributors. An accident, of course! It was designed for benching world records, and the last thing you want is a VRM melting or being insufficient for 1000+ W on the core; it was built to push Nvidia silicon to destruction for records. Those cards also live on even when the silicon core is dead: they're still good for zombie mods, where you cut them in half and solder copper rails or sheets of copper to a GPU with a weaker or more restricted VRM and use the power delivery from these cards. And yes, you can use multiple at once; Buildzoid, another overclocker, had 3 extra GPU zombies powering an R9 290X, lol, it was a monster! EVGA also sells power boards by themselves, but they're very expensive (I bought one on sale, the Epower V board), while the Classified cards are often cheap enough to make the hacking worth it.
      The reason for the 3x 8-pin is that Nvidia says the connector is only good for 150 W, so they're obligated to label it as "450 W capable", even though those connectors are proven to handle 800-1000 W each. The official spec says 150 W, and it's a weird Nvidia requirement (even though these cards are notorious rebels against the contract, like the unrestricted distribution of unlocked BIOSes).

  • @anasevi9456
    @anasevi9456 4 years ago +11

    Thanks for another great video. Fermi and GCN 1 are holding up well, which is to be expected. People are using the better Nvidia current-gen results as some sort of harbinger that they will always be better at 'ray tracing' even when no bespoke hardware is used, but it really is just the game engine and AMD's typically lax DX11 support when it's not a big-name release. How they have let older APIs slide off a cliff in their drivers over the last few years is not good.

  • @dlbutters7164
    @dlbutters7164 4 years ago +1

    Those are some beautiful-looking cards, I love their designs so much! Breathtaking!

  • @philipn8576
    @philipn8576 4 years ago +5

    I’m surprised how well this benchmark runs.
    I have a 4790k and a GTX 1060. I got a score of 3792 @ 1080p

    • @WyattOShea
      @WyattOShea 4 years ago +1

      Core i7 5820K @ 4.4 GHz with a GTX 1080 Ti here; got a score of 9138 with a few things open (Steam, many tabs in Firefox, etc.), so that might be impacting my performance slightly.
      EDIT: Forgot to add that this was also at 1080p ultra settings.

    • @WyattOShea
      @WyattOShea 4 years ago

      Just redid the test after upgrading to a Ryzen 3700X and 3600 MHz Trident Z Neo RAM and got a 9250 score, so not much difference there, but in games there is a pretty good difference over the 5820K I was using with 2666 MHz RAM.

    • @hardstylboy
      @hardstylboy 9 months ago

      @@WyattOShea I have tested a 3060 12 GB with an i9-12900K, also at 1080p ultra, and got a 10140 score.

    • @WyattOShea
      @WyattOShea 9 months ago +1

      @@hardstylboy Noice I've forgotten what this even is though tbh haha been a long time since I commented on this.

    • @hardstylboy
      @hardstylboy 9 months ago +1

      @@WyattOShea Yeah, cool, hey, I had forgotten about this demo too. Now I can run it smoothly. Thanks for your comment, greets from Holland.

  • @uranium5694
    @uranium5694 4 years ago +2

    LOL Nice!!!! I will test my 750 Ti with this benchmark as soon as I get the free time!!!!

  • @Mr371312
    @Mr371312 4 years ago +3

    Remember, they had planned dedicated PhysX hardware, but we know where that went, judging by the lack of PhysX cards today. I wonder if dedicated RTX cards will go the same way?

  • @adi6293
    @adi6293 4 years ago +1

    Fermi was a good architecture in my opinion. I am an AMD/ATi fan, but I chose the GTX 480 over the HD 6xxx cards due to nVidia's superior DX11 performance.

    • @KyriaxWitch
      @KyriaxWitch 4 years ago

      My mistake was not choosing Nvidia at the time. I bought the HD 6990 instead of the 590 because I'd had good times with my ATI HD 3870, so I expected the same from the newer gen... I was so disappointed that I buried the video card in my garden a year later, after too many headaches with drivers that wouldn't work properly on Windows 7 for anything in the world.
      R.I.P. ATI

  • @jpmon1846
    @jpmon1846 4 years ago +1

    Awesome video!! Nice to see a channel that appreciates classic hardware.

  • @PixelPipes
    @PixelPipes 4 years ago +5

    Not as great as my trusty Radeon 7500 could've done, but hey, not bad!

  • @Joeeye123
    @Joeeye123 4 years ago +1

    Got 4463 at 4K ultra settings, which is a huge drop-off from 12165 at 1080p. System is an i7-8700K with RTX 2080 Ti SLI.

  • @StaticVapour590
    @StaticVapour590 4 years ago

    Wow, TeraScale is really done now... Tahiti holds up.

  • @DunhaCC
    @DunhaCC 4 years ago

    Holy cow, the GTX 580 has 2x 8-pin and 1x 6-pin connectors 😱😱

  • @Snoopmasta
    @Snoopmasta 4 years ago +1

    Thx for the test! Had to download it myself now. XD Got 10677 points with the GTX 1080 in your tested resolution.

  • @Ammageddon89
    @Ammageddon89 4 years ago +1

    7500 points @ 1440x900 ultra with my Vega 56 and a 6800K @ 4 GHz.

  • @mileskosik472
    @mileskosik472 4 years ago +1

    Thanks for doing this. I was planning on doing this, but now I don't have to dig up my old gpus.

  • @lm_dccxl4078
    @lm_dccxl4078 4 years ago +1

    Maybe I should do what DF does: a CRT and 1024x768 resolution, with this ray tracing on my old R7 370 xD

  • @kanopi07
    @kanopi07 4 years ago +1

    Excellent video as always; would like to see a solo video on the 390's performance :)

    • @jerrybandy3827
      @jerrybandy3827 4 years ago

      I have the "aging" R9 390X with 8 GB of VRAM. I'm not sure if it's the same as what you tested here. I know it's not the same brand.

    • @F2FTech
      @F2FTech  4 years ago

      390 vs. 970 is on the list

  • @eizomonitor6003
    @eizomonitor6003 4 years ago

    RX 580 @ 1080p on the Very High setting: 5338 points. If a High setting were available, it could probably do around 60-100 fps. That's my estimate based on my experience.

  • @oldaccount9190
    @oldaccount9190 4 years ago +1

    They say 22 frames is unplayable, but if you're poor, 22 frames is like 60 fps.

  • @GoldSrc_
    @GoldSrc_ 4 years ago

    Great to see the old girls still manage to do some lifting.
    I still haven't tried it on my old GTX 650, but I can predict it will hate me xD.

  • @nojoojuu
    @nojoojuu 4 years ago +1

    Thanks! Info was great!

  • @furynotes
    @furynotes 4 years ago

    That Crytek engine just destroys the AMD "fine wine" argument on those AMD cards. For the most part.

    • @likeclockwork6473
      @likeclockwork6473 4 years ago

      Nah, the fine wine thing was really exclusive to GCN. If an HD 5870 loses to Intel integrated graphics, there is an incompatibility problem with the code.

  • @nfreddyyy
    @nfreddyyy 4 years ago

    Damn, another good video. Keep up the great work! It's always interesting to see what it'd be like on older hardware. Now, with this new software-based RT, are there games that can utilise it already?

  • @blackpete
    @blackpete 4 years ago +4

    Comparing my 980 would be interesting, but not fair. It's BIOS-modded and on water. Do you have a stock one you can throw in? Great video, as always.

    • @F2FTech
      @F2FTech  4 years ago

      Wish I did. Still need to get another Maxwell card. Would like a 970 or 980 - Thanks.

    • @Ethan-rz6cx
      @Ethan-rz6cx 4 years ago

      F2F Tech would you be interested in a 1050? I’m upgrading and won’t need it, but I’m broke so you can’t have it for free

    • @SFearox
      @SFearox 4 years ago +1

      980Ti score 9842

    • @blackpete
      @blackpete 4 years ago

      @@SFearox Stock or OC'd? Power limit raised? Thx!

    • @SFearox
      @SFearox 4 years ago +1

      @@blackpete I have the Gainward 980 Ti Phoenix GS, OC'd to 1.5 GHz :) No BIOS mod or power limit raise.

  • @heldenkatze
    @heldenkatze 4 years ago

    nice, thanks for the benchmarks

  • @KARAOTI23
    @KARAOTI23 4 years ago +1

    Just for comparison, my OC'd RX 570 4GB Nitro+ at 1366x768 scored 6041.

    • @F2FTech
      @F2FTech  4 years ago +1

      Awesome. Thanks 🙏

  • @claucmgpcstuf5103
    @claucmgpcstuf5103 4 years ago

    very interesting yeeeeeeeeeeessss !!!

  • @reallybigmistake
    @reallybigmistake 4 years ago +1

    wish you had tested the 1070 Ti, my card lel

  • @SFearox
    @SFearox 4 years ago

    3770K + 980Ti score 9842, same settings as in the video :)

  • @kevinragsdale6256
    @kevinragsdale6256 4 years ago

    I can't ever OC my GPU by more than 100 MHz for games, but with Neon Noir I could get 140 MHz.

  • @SireDragonChester
    @SireDragonChester 4 years ago +1

    Wow, no 980 Ti benchmark? Or 1080 Ti?

    • @F2FTech
      @F2FTech  4 years ago +1

      Wish I had my old 980 Ti. Still building the newer card collection 😅

    • @SireDragonChester
      @SireDragonChester 4 years ago +1

      F2F Tech
      Ya, my first high-end GPU was an EVGA GTX 980 Ti SC. After my older GTX 670 FTW started showing its age, I was finally able to buy the GTX 980 Ti, and I loved that card. It was a beastly card for its time. Then the fan motors and ball bearings started to go, with less than 10 days left on the warranty. My store couldn't fix it, and they let me upgrade to an MSI GTX 1080 Ti. I still miss the old 980 Ti, but I'm happy with the 1080 Ti. Not a fan of RTX, as they're way overpriced here in Canada (an RTX 2080 Ti is almost $1600-1700 Canadian); not worth it IMO.
      But good video, and thanks for the quick reply. Keep up the good work :)

  • @ExMachinaEngineering
    @ExMachinaEngineering 4 years ago +1

    I bought an HD 7970 GHz about a year ago for $60... Sorry, just had to say it...

  • @Fin4L6are
    @Fin4L6are 4 years ago +1

    cool

  • @ForceGMs
    @ForceGMs 4 years ago

    Windows 7 is about to die and Windows 10 is DX12, so how is this going to work in the future?

    • @michalzustak8846
      @michalzustak8846 2 years ago

      Windows 10 can run DirectX 11, 10 and lower games just fine.

  • @hardstylboy
    @hardstylboy 9 months ago

    i9-12900K, 16 GB DDR4, 3060 12 GB, 1080p ultra settings: 10140 score.

  • @DukenukemX
    @DukenukemX 4 years ago +1

    Now try and test this in Linux since AMD's OpenGL drivers are better optimized there.

    • @KayX291
      @KayX291 4 years ago +1

      'Cept this would be a problem with older AMD GPUs: if it's not a GPU with the GCN 1 architecture or newer, using the AMDGPU kernel driver, there will be problems.

    • @DukenukemX
      @DukenukemX 4 years ago

      @@KayX291 Why? Is it because it doesn't have Vulkan, or because it may be limited to OpenGL 3.3? Though I think a Radeon HD 5850 can do OpenGL 4.1. Not sure.

    • @KayX291
      @KayX291 4 years ago +1

      @@DukenukemX Correct me if I'm wrong, but I think it's because of the lack of attention to improvements compared to recent AMD GPUs.

    • @DukenukemX
      @DukenukemX 4 years ago

      @@KayX291 As someone who owns a number of HD 5000 and 6000 GPUs, I can tell you that AMD gave up on them a long time ago, unlike Nvidia, who actually supports their older GPUs, even giving them Vulkan and DX12 support.
      On Linux it's a different story, where older AMD GPUs do get support, but GCN GPUs can use the AMDGPU driver while older GPUs are stuck with the R600 driver. AMD gave up on R600, but the code is open source and anyone can work on it, and some do. The HD 6000 cards, for example, don't have hardware fp64 and are therefore stuck at OpenGL 3.3. Someone is working on a soft fp64, which should upgrade them to OpenGL 4.1 or even higher. There was interest in someone making a Vulkan driver, but that would be a lot of work with very little payoff.
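      To illustrate that last point (a hypothetical sketch assuming GLFW and a desktop GL driver -- not Mesa code), an application can query what the installed driver actually exposes, including the fp64 extension that OpenGL 4.0 requires:

      ```cpp
      // check_gl_caps.cpp -- sketch: print what the current GL driver reports.
      // Build (Linux): g++ check_gl_caps.cpp -lglfw -lGL
      #include <GLFW/glfw3.h>
      #include <cstdio>
      #include <cstring>

      int main() {
          if (!glfwInit()) return 1;
          // Ask for a legacy ("any") context so glGetString(GL_EXTENSIONS) stays
          // valid; the driver returns the highest compatibility version it supports.
          glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);      // hidden helper window
          GLFWwindow* win = glfwCreateWindow(64, 64, "caps", nullptr, nullptr);
          if (!win) { glfwTerminate(); return 1; }
          glfwMakeContextCurrent(win);

          std::printf("Renderer: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
          std::printf("Version:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));

          // GL_ARB_gpu_shader_fp64 is the double-precision feature OpenGL 4.0
          // requires; its absence is what keeps pre-GCN cards at GL 3.3 on Mesa,
          // as described in the comment above.
          const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
          bool fp64 = ext && std::strstr(ext, "GL_ARB_gpu_shader_fp64");
          std::printf("GL_ARB_gpu_shader_fp64: %s\n", fp64 ? "yes" : "no");

          glfwDestroyWindow(win);
          glfwTerminate();
          return 0;
      }
      ```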

    • @jakegarrett8109
      @jakegarrett8109 4 years ago

      Demetrius Lolos But on Linux, when I tried to install drivers for my Titan Xp, it F'ed up my OS install... Nvidia's Linux drivers are $hit, but I kind of forgot about it until I tried it and went "damn, that's why I always went Radeon for Linux, oh yeah!".
      Even on Windows 10 it's not good. For like 6 months my Titan was unusable; it would crash every game after about 20 minutes but did fine in benchmarks hour after hour, so I thought it was just Windows crapping itself until I tried it on several other computers. For example, during a weekend gaming marathon at my cousin's house we played Fortnite for 5 hours straight, and I crashed 3 times and he crashed 7 times; he has a GTX 1080. So with one of us blue-screening, or the picture turning black with audio but obviously not playable blind, we averaged about 30 minutes between crashes; I think out of 15 matches we finished 2... You know how often my 4x CrossFire Fury setup crashes? Maybe three times a year, probably less. Yeah, that's like one day of Nvidia. Now, does the Titan work fine like 2 years after Pascal launched? Sure, it hasn't crashed a single game since, but damn is $1200 sitting on a shelf in a closet being unusable annoying! It wasn't even an upgrade over the 2x Fury I used back then. Oh yeah, and this wasn't a day-1 card; this was the Star Wars collector's edition, so Pascal had been out for about a year, because I wanted to avoid those day-1 "bleeding edge" problems...
      I will never use Nvidia for Linux. Even if you could fix the drivers, you can't, because they are proprietary (and they take a massive performance hit versus Windows, which is honestly understandable because Linux gamers are not a majority market share, and you'd at least like the drivers to work in one OS, cough, Titan-and-Windows experience, cough). So don't assume they would win on Linux vs Radeon; I think even the community-made open-source Radeon drivers will dominate. You're assuming the support is anything like Windows, and it's not (Radeon, on the other hand, really pushes open source forward and is a lot more open with the code).

  • @mdrumt
    @mdrumt 4 years ago

    The UHD 630's textures are all messed up and pop in and out, ick.

    • @geerstyresoil3136
      @geerstyresoil3136 4 years ago

      yea, the scores for that should be invalid, the textures were missing all over the place...

  • @Krisztian5HUN
    @Krisztian5HUN 4 years ago +1

    TeraScale is crippled by old drivers.

    • @Krisztian5HUN
      @Krisztian5HUN 4 years ago

      @I like life This benchmark score is clearly a driver issue (as always); the 6970 is faster than the 5870, not the same...

    • @Krisztian5HUN
      @Krisztian5HUN 4 years ago

      @I like life I really don't care about the Fermi cards, but this TeraScale performance is clearly a driver issue; 5870 vs 6970, same scores...

    • @Krisztian5HUN
      @Krisztian5HUN 4 years ago

      @I like life No, I don't care, because it is clearly an old driver problem (again).

    • @CompatibilityMadness
      @CompatibilityMadness 4 years ago +2

      @@Krisztian5HUN VLIW SUCKS at compute; we should all know that by this point. However, if you could write an RT-optimised driver for them to prove us wrong (i.e. "driver's fault"), be my guest. The scores are what you see; if you don't like them, that's your problem. The HD 6970 is VLIW4 and the HD 5870 is VLIW5; assuming only one or two SPs per SPU get used in this benchmark, it doesn't matter that the 6970 should be faster - it can't use its resources efficiently, hence the similar performance to the 5870.

  • @Matthewv1998
    @Matthewv1998 4 years ago

    Poor TeraScale.

  • @OneCosmic749
    @OneCosmic749 4 years ago

    Fermi or the Thermi? :)

  • @Abelamm
    @Abelamm 4 years ago

    more..

  • @keganjackson7609
    @keganjackson7609 1 year ago

    Wana watch this not Alix pass 😂

  • @mikelimtw
    @mikelimtw 4 years ago +4

    NVIDIA's ray-tracing solution and their RT cores were a solution searching for a problem. It's clear that NVIDIA's implementation is critically flawed, as it produces very little visual benefit at a huge hit in performance. Crytek's Neon Noir demo shows that ray tracing can be done (on DX11!) using current hardware without dedicated ray-tracing support, and it can do it well. It goes without saying that implementations supporting the newer DX12 and Vulkan APIs would only boost performance more. NVIDIA laid a goose egg and conned a lot of people out of money for their little science experiment. There is no way for NVIDIA's ray tracing to become the standard, as the industry will standardize around AMD's hardware implementation based on the Navi architecture due to the imminent arrival of the Xbox TWO and PS5 in 2020. The lackluster support in games at this point is pretty much the nail in the coffin.

    • @DJRYGAR1
      @DJRYGAR1 4 years ago +3

      RTX cards do perform even better. Crytek mentioned that, when available, they do use RT cores to speed things up even more. Remember, this is just a 760p test. If anything, it's MS DXR that sucks balls. Crytek made their own algorithms for RT. The hardware can be used in both APIs; in fact, it can be used for many things, like speeding up physics. Stop spreading BS; you have no idea what you are talking about.

    • @arenzricodexd4409
      @arenzricodexd4409 4 years ago +3

      The one locking RT to DX12 is MS, not Nvidia, and there is no such thing as "Nvidia standards" or "AMD standards" when it comes to RT implementation. DXR exists so game developers don't need to go that route. Nvidia RTX will continue as a viable way to accelerate RT performance even when the next-gen consoles, which are AMD-based, come out. When the 8th-gen consoles came out, did AMD start to dominate Nvidia in tessellation workloads? You guys keep praising the Crytek demo and yet never care to learn what corners Crytek cut to get that performance. Even Crytek themselves admit that Nvidia RTX, when used, will significantly improve performance, and then you don't need to do tricks here and there like they did.

  • @micaellanoeric4122
    @micaellanoeric4122 4 years ago

    No need for RTX...

  • @mat8448
    @mat8448 4 years ago +1

    'Noir' is not pronounced 'noo are', it's 'know are', just so you are aware before you say something similar to thousands of people again.

  • @tHeWasTeDYouTh
    @tHeWasTeDYouTh 4 years ago

    Crytek should have stuck to PC only and made Crysis 2 and 3 as groundbreaking as the original game. I hate consoles so much. Remember when the Crytek owners said free-to-play was the future, and two years later they closed almost all of their worldwide studios? They also sold Far Cry to Ubitrash...

  • @weetjewatikwil1
    @weetjewatikwil1 4 years ago

    This video does not make any sense 🙄🙄