bazhenovc
  • Videos: 6
  • Views: 30,346
Esoterica new renderer - mesh rendering pipeline
ERRATA: MeshCluster struct is actually 32 bytes!
Slides: docs.google.com/presentation/d/1AkQjqTTkBDgFVn_dYlAvahpbuUtCtTJb3qXdK6Bv_lI/
GPU-Driven Rendering Pipelines: advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
Nanite: advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
Alan Wake 2: www.remedygames.com/article/how-northlight-makes-alan-wake-2-shine
GeometryFX: github.com/GPUOpen-Effects/GeometryFX
Views: 1,920

Videos

The Sane Rendering Manifesto
7K views · 10 months ago
Gist for discussions: gist.github.com/bazhenovc/c0aa56cdf50df495fda84de58ef1de5e
Grizzly bear (Xawas)
216 views · 1 year ago
Why you should _still_ never use deferred shading (follow-up)
2.7K views · 1 year ago
This is a follow-up to my previous video that tries to address some of the raised concerns. Slides: docs.google.com/presentation/d/1-wu7DxmiIxHK7n5_nBElIhU8xnD2cf44FD28F-zsbwM/edit?usp=sharing Links: interplayoflight.wordpress.com/2020/11/11/what-is-shader-occupancy-and-why-do-we-care-about-it/ meshoptimizer.org/ diaryofagraphicsprogrammer.blogspot.com/2018/03/triangle-visibility-buffer.html
Why you should never use deferred shading
18K views · 1 year ago
Personal and strongly opinionated rant about why one should never use deferred shading. Slides: docs.google.com/presentation/d/1kaeg2qMi3_8nQqoR3Y2Ax9fJKUYLigPLPfdjfuEGowY/edit?usp=sharing Links: github.com/zeux/meshoptimizer vkguide.dev/docs/gpudriven/gpu_driven_engines/ vkguide.dev/docs/gpudriven/compute_culling/ newq.net/dl/pub/SA2014Practical.pdf advances.realtimerendering.com/s2020/Renderi...
Crow watching for fledglings
80 views · 4 years ago

Comments

  • @senkrouf
    @senkrouf · 1 month ago

    Alien Isolation runs on my toy laptop; Doom 3 / Quake 4 don't run on my toy laptop. Why? I know that Alien Isolation uses deferred rendering and Doom 3 and Quake 4 do some stupid shit. My laptop is 0.3 TFLOPs and 128 VRAM.

  • @helviett
    @helviett · 1 month ago

    Hello! Decided to visit this presentation again to set up a proper Forward Renderer for our upcoming project. I'm wondering: what's the advantage of using RGB32_UINT instead of 3 separate targets, each with a specific format (R11G11B10_F, R16G16_(F|SNORM), R32_UINT)? How is alpha blending performed into such a render target? How do you bilinearly sample from it? Does it make reading from such a target less cache efficient? Seems like I'm missing some crucial knowledge.

    • @bazhenovc754
      @bazhenovc754 · 1 month ago

      You will not get correct results either way: g-buffer normals are not color, and if you use alpha blending on them you will get random results and it will look bad. This is the same issue with deferred shading as well - you can't alpha blend normals and material properties; that data is not color. So in that regard, the texture format doesn't matter that much.

      If you need bilinear sampling you can split some stuff into separate filterable render targets, but try to keep the overall bandwidth under control (i.e. 128 bits in total: use 3 MRTs - 32-bit filterable color, 32-bit filterable normal, 64-bit packed non-filterable). Keep in mind MRT is going to be a bit more expensive even if the overall bits per pixel are equivalent. Same goes for alpha blending: if you have a ton of translucent geometry, it might make sense to keep the color separate in an alpha-blendable format.

      For transparency you essentially have 2 options:
      1. Separable blending: the color target uses alpha blending, all other targets replace the data. This will make the color appear blended, but AO will be using the translucent front surface as input and AO won't be visible behind it.
      2. Only use alpha blending and don't touch g-buffer normals at all. This will make the color appear blended, but AO will be using the opaque surface behind it as input - AO will be visible behind the translucent surface, but won't be applied to the translucent surface itself.

      Same goes for DOF and others that use g-buffer inputs in one way or another. Depending on your game, you can use either 1 or 2 or both - the difference will be in how these things interact with post-processing (different types of artifacts). Experiment with that and find the right combination that works for your game.
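
The 128-bit-per-pixel budget described in this reply can be sketched on the CPU. This is a hypothetical illustration only - the struct and function names are made up, and 16-bit quantization is just an example; the real layout would live in texture formats and shader code:

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical CPU-side view of the 128-bit budget from the reply:
// 32-bit filterable color + 32-bit filterable normal + 64 bits of
// packed, non-filterable (and non-blendable) material data.
struct GBufferPixel {
    uint32_t color;     // e.g. R11G11B10_FLOAT - filterable, alpha-blendable
    uint32_t normal;    // e.g. R16G16 octahedral - filterable
    uint64_t material;  // packed PBR params - NOT filterable, NOT blendable
};

// Pack four [0,1] material parameters into 16 bits each of a 64-bit word.
uint64_t packMaterial(float roughness, float metalness, float ao, float extra) {
    auto q16 = [](float v) -> uint64_t {
        if (v < 0.f) v = 0.f;
        if (v > 1.f) v = 1.f;
        return (uint64_t)(v * 65535.f + 0.5f);  // quantize to 16 bits
    };
    return q16(roughness) | (q16(metalness) << 16) | (q16(ao) << 32) | (q16(extra) << 48);
}

// Recover one parameter by slot index (0..3).
float unpackMaterial(uint64_t packed, int slot) {
    return (float)((packed >> (slot * 16)) & 0xFFFF) / 65535.f;
}
```

The point of keeping the material word separate is exactly the reply's: hardware blending and filtering are only meaningful on the color (and arguably normal) targets, never on the packed bits.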

  • @Felipe-rn1gf
    @Felipe-rn1gf · 1 month ago

    Minecraft bedrock

  • @KaledTV
    @KaledTV · 1 month ago

    >Decouple your game logic update from the rendering code
    Braindead fighting game developers need to listen.

  • @KaledTV
    @KaledTV · 1 month ago

    Great video

  • @dahahaka
    @dahahaka · 1 month ago

    I agree with most of what was said, but... "Nobody cares about photorealism and extreme fidelity" - what? Why would you think that?

  • @GadgetGamesAU
    @GadgetGamesAU · 2 months ago

    Allowing for custom BRDFs in Forward+ is greatly underrated. No serious graphics programmer should be happy with Disney-GGX'ing everything like most modern deferred games. You also missed talking about bloom and AO, which are everywhere and big enough to mention in the pipeline overviews IMO. True forward -> SSAO is bad as it lacks normals and needs depth + normals. Deferred already has everything it needs guaranteed.

  • @homematvej
    @homematvej · 2 months ago

    Ahaha, dude has never heard about tiled rendering architectures. Everything he says here is only relevant for immediate-mode architectures. So use deferred for tiled architectures.

    • @bazhenovc754
      @bazhenovc754 · 2 months ago

      @homematvej If you have any kind of complex post-processing, you are going to move the g-buffer off-tile; with forward rendering this is 2 render targets fewer than deferred. The post-processing is unchanged in both cases. By "complex" I mean effects that need to access adjacent pixels - any kind of antialiasing, for example. Sure, you can run simple stuff like tone mapping on tile, but then your AA quality will suffer if you run it post-tonemap. I can assure you that I've heard about tile-based architectures, and I can also point you to vendor documentation that clearly recommends forward shading: community.arm.com/arm-community-blogs/b/graphics-gaming-and-vr-blog/posts/killing-pixels---a-new-optimization-for-shading-on-arm-mali-gpus You need to learn more about the hardware, TBDR in particular.

  • @Jetman082
    @Jetman082 · 3 months ago

    Funny enough, I'm actually using deferred rendering to upgrade an old game called Gunz: The Duel. I'd rather use Forward+, but I'm not willing to upgrade the game's entire engine from Direct3D 9 😂

  • @yggdrailongicorn
    @yggdrailongicorn · 3 months ago

    Fantastic work. I have looked into a bunch of Mesh Cluster material and finally found a Mesh Cluster solution that takes SkinnedMesh into account.

  • @oomegator
    @oomegator · 3 months ago

    TAA and AI upscaling "technologies" are a plague on modern gaming.

  • @breadtoucher
    @breadtoucher · 3 months ago

    Any timeframe for when we can expect the source release? I am super interested. :) Merci

    • @bazhenovc754
      @bazhenovc754 · 3 months ago

      Some time next year; I'm working on this in my free time, and unfortunately I don't have as much free time as I'd like.

  • @Mireneye
    @Mireneye · 3 months ago

    I'm very curious since I have my own big gripes with deferred rendering. But at the same time almost all modern engines and a huge portion of games released use it. What's your take on why it should never be used?

    • @bazhenovc754
      @bazhenovc754 · 3 months ago

      I have a separate video about deferred shading: ruclips.net/video/QVbOp1h-Jb4/видео.html

    • @Mireneye
      @Mireneye · 3 months ago

      @bazhenovc754 Amazing, man! Thank you for taking the time out of your day to link it; I look forward to watching it.

  • @vtruo64
    @vtruo64 · 3 months ago

    Great video, looking forward to seeing how this progresses :). Btw, what does CTF stand for?

    • @bazhenovc754
      @bazhenovc754 · 3 months ago

      Compute triangle filtering; here is the link: github.com/GPUOpen-Effects/GeometryFX I'll add it to the video description.

  • @haochen4826
    @haochen4826 · 3 months ago

    Can I post this video to bilibili? I'll cite the source.

    • @bazhenovc754
      @bazhenovc754 · 3 months ago

      Yes, I have no problems with that.

    • @haochen4826
      @haochen4826 · 3 months ago

      @bazhenovc754 Thank you very much; this video is very enlightening.

  • @TsutsuYumeGunnm
    @TsutsuYumeGunnm · 3 months ago

    The point when graphics programming stopped being fun and started looking like some corpo-banking-high-load-database software 😔

    • @bazhenovc754
      @bazhenovc754 · 3 months ago

      I did the boring part for you so you can focus on writing fun shaders :)

  • @qendolin
    @qendolin · 3 months ago

    While I don't understand even half of the techniques mentioned, I still very much appreciate the video. You've made me aware of more things to learn about. Also the references are much appreciated!

  • @bazhenovc754
    @bazhenovc754 · 3 months ago

    Video has an error: MeshCluster struct size is actually 32 bytes, not 64.

  • @lanchanoinguyen2914
    @lanchanoinguyen2914 · 4 months ago

    I don't like deferred shading because I find it verbose, although it might be more efficient. I'm having a hard time deciding whether or not to implement deferred shading, because it also changes my shaders a lot.

  • @lanchanoinguyen2914
    @lanchanoinguyen2914 · 4 months ago

    CPU frustum culling is damn cheap, but draw calls aren't cheap, so unless your system is 100% running on the GPU, there is no reason to have GPU frustum culling.

    • @ahslanabanana
      @ahslanabanana · 3 months ago

      Indirect draw calls are a prerequisite here. You basically have an array of draw call structures and a counter. A compute shader does the culling test and, if it passes, atomically increments the counter and writes the draw call values. Then it's submitted for execution using a SINGLE CPU call (e.g. ExecuteIndirect on DX12).
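
The flow in this reply can be sketched on the CPU. Assumptions: the argument layout mirrors D3D12's `D3D12_DRAW_INDEXED_ARGUMENTS`, a single plane test stands in for a full frustum test, and the loop stands in for GPU threads; the real version runs as a compute shader whose survivors are consumed by `ExecuteIndirect`:

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// Same field layout as D3D12_DRAW_INDEXED_ARGUMENTS.
struct DrawIndexedArgs {
    uint32_t indexCount, instanceCount, firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

// One culling candidate: draw args plus a bounding sphere.
struct Candidate {
    DrawIndexedArgs args;
    float center[3];
    float radius;
};

// "Compute shader": test each bounding sphere, atomically append survivors.
// Returns the count that would drive the indirect submission.
uint32_t cullAndCompact(const std::vector<Candidate>& in,
                        std::vector<DrawIndexedArgs>& out) {
    std::atomic<uint32_t> counter{0};
    out.resize(in.size());
    for (const Candidate& c : in) {               // each iteration = one GPU thread
        // Keep if the sphere is not entirely behind the plane z = 0.
        if (c.center[2] + c.radius > 0.0f) {
            uint32_t slot = counter.fetch_add(1); // atomic append
            out[slot] = c.args;
        }
    }
    out.resize(counter.load());
    return counter.load();
}
```

The compaction is what makes the single CPU call possible: the CPU never learns which draws survived, it only hands the GPU the argument buffer and the counter.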

  • @abobe5572
    @abobe5572 · 4 months ago

    Actually, you should never use TAA

  • @miccallwang3164
    @miccallwang3164 · 5 months ago

    Our graphics programmers are now starting to prepare a hybrid rendering pipeline: use forward rendering on certain tagged objects without drawing them into the G-buffer, then after the lights are drawn, render the forward pass on top.

  • @Mallchad
    @Mallchad · 5 months ago

    You would not believe how many "performance" issues I've improved or fixed completely by working on things like input refresh rate, frame pacing and just generally keeping the refresh rate of things high and the latency low. Whilst I don't necessarily think things like temporal super resolution and dynamic resolution are always bad, they introduce so many other problems that didn't need to exist in the first place. This is also particularly bad in the mobile and VR industries, where every game runs way slower than it has any right to.

  • @GeorgeTsiros
    @GeorgeTsiros · 6 months ago

    Are AMD's GPUs really _that_ unpopular? o_O

  • @GeorgeTsiros
    @GeorgeTsiros · 6 months ago

    Do you happen to have a link for that Doom Eternal presentation? (Is it a talk, or only a set of slides?)

    • @bazhenovc754
      @bazhenovc754 · 5 months ago

      The link is in the description: advances.realtimerendering.com/s2020/RenderingDoomEternal.pdf

    • @GeorgeTsiros
      @GeorgeTsiros · 5 months ago

      @bazhenovc754 I saw that after a while; it is not a very helpful PDF :( There should be a talk somewhere.

  • @oberguga
    @oberguga · 6 months ago

    What if I render at 240Hz and use TAA over 4 frames? It's still the same as 60Hz in terms of lag, and better looking.

    • @bazhenovc754
      @bazhenovc754 · 5 months ago

      If you are rendering at 240Hz you can just jitter frames; you don't even need to do history blending. At that framerate your brain is doing the blending for you. Here's a shadertoy demo with the effect (don't forget to change the shader parameters as the source code comments say!): www.shadertoy.com/view/tl2fRy
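
For reference, the usual way to generate this kind of per-frame subpixel jitter is a low-discrepancy sequence such as Halton(2, 3). This is a generic sketch, not the linked shadertoy's exact sequence:

```cpp
// Radical-inverse Halton sequence: halton(i, 2) and halton(i, 3) give a
// well-distributed 2D point in [0, 1) for frame index i.
float halton(unsigned index, unsigned base) {
    float f = 1.0f, result = 0.0f;
    while (index > 0) {
        f /= (float)base;
        result += f * (float)(index % base);
        index /= base;
    }
    return result;
}

// Subpixel jitter offset for a given frame, centered around zero so the
// average sample position stays at the pixel center.
void frameJitter(unsigned frame, float& jx, float& jy) {
    jx = halton(frame + 1, 2) - 0.5f;  // +1: halton(0, b) is always 0
    jy = halton(frame + 1, 3) - 0.5f;
}
```

The offsets are added to the projection matrix (or to the pixel's sample position) each frame; at very high refresh rates the eye integrates the jittered frames into an antialiased image on its own.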

  • @wo262
    @wo262 · 7 months ago

    Working with VR, we don't use anything with temporal accumulation of any sort. Yet this manifesto is very dogmatic, particularly nowadays when anything temporal in games looks way better than it did before, and better each year. This is never addressed; it's as if we're still in 2015 in terms of TAA, ghosting or smoothing. The manifesto never asks for better temporal techniques, even though they have been improving - it just asks for no temporal at all.

    The problem with spatial single-frame filtering, for specular or unified sampling, is that since temporal data would be a no-no, it can look unstable from frame to frame even if it looks good in stills (bad per the manifesto). Meanwhile, with spatiotemporal reconstruction, the shared visible parts of an image can look clean and native even in motion, because the data isn't speculative (the valid samples were real, just taken before), because of the stability and lack of shimmer, and because unified sampling can make textures look even better if sampled from higher-res mips.

    However, spatial single-frame filtering is essentially what's happening on camera cuts and in disoccluded parts with reconstructors like DLSS and the bunch, even if they accumulate the rest. This aspect of rendering has been getting better looking in its own right, all while getting less distracting than the cut or motion itself, and with a smoother and faster transition to convergence in subsequent frames. So it's the best of both worlds; there's no reason to do away with it. Temporally reconstruct the parts you can, render and filter the others as best you can, and make the transition from one to the other as fast and non-distracting as possible. Improve on all fronts.

    It's very easy to ask for things like the AA to be replaced, but for many other graphical features, or for the concept of unified sampling with all the advantages it offers, the only proposal here is "just don't use them" with no alternative offered. The burden of offering alternatives should be on the one asking to kill accumulation.

  • @exotic-gem
    @exotic-gem · 7 months ago

    I hope Jonathan Blow sees this; I think he would agree with most of it and have interesting thoughts and things to add! About ray tracing: having played Minecraft extensively with it, most of us were more than willing to trade render distance for better GI. On the other hand, lower frame rates became even more headache-inducing than before, suggesting that trade was not worth it.

    • @Mallchad
      @Mallchad · 5 months ago

      All games *have* to use some kind of ray tracing, because it's the only way to test for real shadows in general. The difference is that real games test them infrequently.

  • @majormalfunction0071
    @majormalfunction0071 · 7 months ago

    MSAA is something I considered for my game and engine. Visibility buffer shading is a boon here. Basically, it's deferred shading with draw call information - vertex ID, instance ID and draw call ID - in one 32- or 64-bit render target (the "triangle ID" in papers). You can run the vertex shader once per unique sample, because you're already running the vertex shader per pixel. You eat overdraw while writing triangle IDs, where no material or lighting code is evaluated. Read "Efficient Virtual Shadow Maps for Many Lights" - it offers shadows for all lights. They use clustered shading to compute the minimum set of light and shadow map pairs.
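
The single-render-target packing this comment describes amounts to plain bit twiddling. A minimal sketch of a 32-bit variant - the 25/7 bit split is an arbitrary example budget, not a value from any particular paper:

```cpp
#include <cstdint>

// Hypothetical 32-bit visibility-buffer layout: draw ID in the high bits,
// triangle ID within the draw (or cluster) in the low bits.
constexpr uint32_t kTriangleBits = 7;                  // up to 128 tris per cluster
constexpr uint32_t kTriangleMask = (1u << kTriangleBits) - 1;

// Written per pixel during the (cheap, attribute-less) geometry pass.
uint32_t packVisibility(uint32_t drawID, uint32_t triangleID) {
    return (drawID << kTriangleBits) | (triangleID & kTriangleMask);
}

// Read back in the shading pass to refetch vertices and evaluate materials.
uint32_t visibilityDrawID(uint32_t v)     { return v >> kTriangleBits; }
uint32_t visibilityTriangleID(uint32_t v) { return v & kTriangleMask; }
```

Because the geometry pass writes only this ID, overdraw during it costs almost nothing; all material and lighting work happens once per visible sample after the unpack.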

  • @GreenDave113
    @GreenDave113 · 7 months ago

    I came to this video with an open mind and I still think most of it is very uninformed. I agree that temporal antialiasing can make games look more blurry, but that's about it. DLSS now looks very good in motion and enables more people to play nice-looking games. And without temporal reuse, half of modern graphics aren't possible and our technology would have to regress back to the 2010s.

    • @GreenJalapenjo
      @GreenJalapenjo · 6 months ago

      DLSS is NVIDIA-only.

    • @stacklysm
      @stacklysm · 6 months ago

      For the low percentage of users that have a DLSS-compatible card. And (imo) the graphics upgrades from the last decade really haven't done much for AAA gaming.

    • @cowclucklater8448
      @cowclucklater8448 · 5 months ago

      I get temporal reuse for raytracing and cloud volumetrics, but what other technologies is it necessary for? I know you can do hair without it, and it looks a lot better than undersampling it and using temporal reuse. Also, you are missing the fact that TAA adds ghosting - a lot of it. It also adds shimmering to things that should not be shimmering. And the blur is very strong in motion and makes the image look like an oil painting; foliage especially loses so much detail. DLAA helps a lot with these things, but it doesn't remove any of them completely. You can tune it to remove one almost completely, but the other two get worse. AI TAA (DLAA) is also VERY expensive compared to TAA and other AA options.

    • @philliptrudeau-tavara3828
      @philliptrudeau-tavara3828 · 3 months ago

      “Without temporal reuse, half of modern graphics aren’t possible” Good.

    • @michaelkreitzer1369
      @michaelkreitzer1369 · 3 months ago

      Blurry is "about it"? That's kind of a big damn deal. As for going back to 2010, that's just absurd; not all techniques since then are incompatible. The first game I played that really embraced this nonsense was Talos Principle 2, and I don't care how much the word "modern" is bandied about - that game's visuals fall apart the moment the camera moves. It's not subtle. After you stop moving you can see the fidelity slowly return and the accumulated lighting errors correct as it catches up over the space of a second or two. It looks _bad_, no matter how pretty the screenshots look.

  • @sixthsurge
    @sixthsurge · 7 months ago

    Funny - I absolutely disagree regarding the temporal amortisation. For effects that are fairly smooth over time, like volumetric clouds/fog, AO, etc., temporal amortisation can be magical. Also, I think proper TAA with per-object motion vectors, YCoCg AABB clipping and a good history resampling function like lanczos2 is great - the best AA method in terms of quality/performance. (Bad TAA is, of course, very bad, but good TAA gets a bad rep because of it, imo.)
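
The YCoCg AABB step this comment mentions can be sketched as follows. This is a CPU illustration with assumed names; a production TAA resolver builds the AABB from the 3x3 neighborhood each frame and typically clips the history toward the box center rather than clamping per component:

```cpp
#include <algorithm>
#include <array>

using Vec3 = std::array<float, 3>;

// RGB -> YCoCg: luma in Y, chroma in Co/Cg. Neighborhood bounds are tighter
// and less prone to chroma bleed in this space than in RGB.
Vec3 rgbToYCoCg(const Vec3& c) {
    return { 0.25f * c[0] + 0.5f * c[1] + 0.25f * c[2],    // Y
             0.5f  * c[0]                - 0.5f  * c[2],   // Co
            -0.25f * c[0] + 0.5f * c[1] - 0.25f * c[2] };  // Cg
}

// Rectify the reprojected history sample against the min/max AABB of the
// current frame's neighborhood (simple per-component clamp shown here).
Vec3 clampHistory(const Vec3& historyYCoCg, const Vec3& aabbMin, const Vec3& aabbMax) {
    Vec3 r;
    for (int i = 0; i < 3; ++i)
        r[i] = std::clamp(historyYCoCg[i], aabbMin[i], aabbMax[i]);
    return r;
}
```

The rectified history is then blended with the current frame's sample; the clamp is what keeps stale history (and therefore ghosting) bounded by what the current frame actually contains.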

  • @Chickenkeeper
    @Chickenkeeper · 8 months ago

    Thanks for addressing the shader permutation issue. The modern trend of using tens of thousands of shaders that need to compile on the fly, to compensate for inefficient art practices, when just a couple of hundred could achieve the same result, is a bit of a pet peeve of mine lol

  • @bits360wastaken
    @bits360wastaken · 8 months ago

    Doing everything in 1 frame, telling people to hack around it with AI, and nobody getting raytracing unless they can brute-force a small portion of it with their $2000 graphics brick looks like you just didn't do any research and just hate TAA. But plastering TAA over everything is a terrible mistake, and not what I would call sane either. I would expect something in between to be the best way to handle things.

  • @tadeohepperle7514
    @tadeohepperle7514 · 9 months ago

    Very information-dense presentation, thank you!

  • @plaguedocphd
    @plaguedocphd · 9 months ago

    We can't even make a proper sky for VR in UE5 now. It's all "lumen lumen lumen nanite nanite" and such poo. They push everyone to try and make AAA games when UE4 was actually something for small developers and teams as well. We can't all be Ubisoft. I was working all day having a blast with UE4 and now with 5 I feel like I almost quit. Old techniques don't work well anymore and new techniques also don't work well...

  • @JH-pe3ro
    @JH-pe3ro · 9 months ago

    There's a broader discussion to be had about temporal amortization and variable rates as it has to do with gameplay logic; because games assume multiple cores now it's very common to have a strategy of buffering some number of frames behind display and interleaving jobs across frames, making an intentional tradeoff between smoothness and latency to eke out a little more core utilization. I think this can be OK if you hit a high enough refresh rate, but not in combination with rendering that drops quality or adds additional amortization steps. My biggest beef is that there's a lack of concern for precision in how the gameplay clock ticks that is making modern engines unresponsive, and it is to some extent driven by the hardware being targeted now and the amount of buffering that that's doing. I got an Agon Light, a little retro SBC, recently, and the PS/2 keyboard input, VGA output and bare-metal OS probably makes it the most responsive device I have.

  • @emidoots
    @emidoots · 9 months ago

    Interesting talk, thank you for sharing it!

  • @breadtoucher
    @breadtoucher · 9 months ago

    Fantastic! Thanks for this. The situation with graphics can be really depressing. When I look at some AAA game projects that were released 10 years ago, they are a few orders of magnitude smaller in terms of the amount of code and tech in them, yet they still hold up pretty well today... All because of the combinatorial explosion of complexity in rendering and surrounding systems. I liked everything you brought up in this presentation. Makes me reconsider many design choices.

  • @gsestream
    @gsestream · 9 months ago

    So you don't like pre-baked lighting, or even on-demand async lighting, which is not computed tied to the frame.

    • @donovan6320
      @donovan6320 · 7 months ago

      He doesn't like on-demand async lighting; however, I don't foresee him having an issue with pre-generated lightmaps, so long as they are guaranteed loaded and only applied to static geo before rendering. The bigger issue seems to be things not loading at runtime and so popping in, looking unresponsive, and then popping in and out or being out of sync. He never said you can't precompute things - more that runtime calculation of things that cannot be done in a single frame, blended over multiple frames, just looks terrible and should not be done at runtime, which is true.

  • @denisanisimov7036
    @denisanisimov7036 · 9 months ago

    Lately I've been listening to Russian speakers in English. A strange feeling, I have to say! I never thought my knowledge of English would come in handy in life for something like this.

  • @moelanen7363
    @moelanen7363 · 9 months ago

    I think your observations are correct, but I dislike this approach of outright banning certain approaches to problems, instead of just saying what the problems are, explaining what causes them, and why current solutions are insufficient. Inherently there is nothing bad about temporal amortization, but as with everything, bad implementations are bad. Saying that a whole family of techniques is banned because of poor implementations of TAA in the past is foolish. I don't think it's correct to say "Do not use temporal amortization" when what you seem to have is an annoyance with how consistency across frames appears to be secondary to the quality of still pictures.

    Similarly, your second rule on only targeting higher refresh rates falls flat when you immediately say that targeting lower rates is actually not a problem. Setting up this rule seems incorrect when, again, your problem seems to be with studios focusing on frame budgets instead of input latency. Under this rule, you'd think that frame generation is fine since it helps you achieve higher rates, but as you say, the actual problem is input latency.

    • @donovan6320
      @donovan6320 · 7 months ago

      What he is saying is: target at least 60 FPS or the user's refresh rate, whichever is lower. You can go higher, but never go lower.

  • @snowwsquire
    @snowwsquire · 9 months ago

    I don't see why raytracing has to be computed in one frame; accumulation over time for mostly static scenes looks better with no real downsides, and dynamic resolution is fine.

    • @charlieking7600
      @charlieking7600 · 7 days ago

      Have you played Minecraft Bedrock with raytracing? It doesn't handle lighting correctly: each light lingers for several seconds after its source is removed.

    • @snowwsquire
      @snowwsquire · 6 days ago

      @charlieking7600 One bad implementation does not mean the technology is fundamentally bad.

  • @DrTheRich
    @DrTheRich · 9 months ago

    According to your rule three, Lumen would not be allowed to work. Yet people clearly like playing games that are built with it, because people DO care about photorealism (depending on context).

    • @MegariskyYT
      @MegariskyYT · 7 months ago

      You have to realize that the average person is a sludge-dwelling numbskull.

  • @winterhell2002
    @winterhell2002 · 10 months ago

    If you think about how Crysis 2 / Battlefield 3 / BioShock Infinite can run at 1440p native on a GTX 660 Ti, it becomes absurd that you'd run a game at a lower native resolution on 10-years-newer hardware.

  • @4AneR
    @4AneR · 10 months ago

    I don't think we need a strict manifesto like "don't do X because I think it's bad". Yes, it's bad, everyone knows. But we need to invest in developing something better than TAA and the dynamic resolution stuff. The temporal information is free to utilize; we just need a better algorithm (or a statistically trained AI, to be honest) that will solve the current artifacts. So I suggest instead doing MORE research on upscaling/prediction/temporal reuse, because display resolution and frequency are increasing, while CPU/GPU interop is still a major issue.

    • @locinolacolino1302
      @locinolacolino1302 · 10 months ago

      I think we've pushed our luck with AA for too long and have to face a simple fact: we cannot generate data where there is none. I was wondering a while ago why you get AA for free with path-traced images, until I realised there are 50+ samples per pixel, so it's basically 50x SSAA. TAA attempts to get the extra samples required for SSAA from adjacent frames whilst neglecting the integrity of motion, and FXAA is just a post-processing effect that basically applies a smoothing filter to edges: absolutely no substitute for the real thing. The only real options are MSAA, whose image improvement vs. performance impact scales rather nicely, or AI upscaling/frame generation featuring a temporal component, which could do with some improvement, particularly from AMD.

  • @eclipsegst9419
    @eclipsegst9419 · 10 months ago

    Thanks for making this! It's a sad state of affairs we're in when Crysis 3, at 11 years old, has better-looking vegetation than the majority of new releases. Deferred rendering was a crappy crutch for last-gen consoles, and forward+ (aka clustered) is what everyone should be using at this point.

    • @DrTheRich
      @DrTheRich · 9 months ago

      Unreal Engine 5 does look better than Crysis 3 by miles, and it does break some of these rules.

    • @eclipsegst9419
      @eclipsegst9419 · 9 months ago

      @DrTheRich Texture resolution? Sure. Gloss accuracy on materials? Crysis 3 still looks better. And MSAA 4x blows TAA away, not to even mention 8x.

    • @DrTheRich
      @DrTheRich · 9 months ago

      @eclipsegst9419 RUclips keeps deleting the lengthy reply I've typed for no reason, so you're in luck. Anyway, if you are comparing an 11-year-old Crysis 3 demo to a recent UE5 demo and come to the conclusion Crysis 3 still looks better, I can't help you... And no, it's not because of texture resolution.

    • @eclipsegst9419
      @eclipsegst9419 · 9 months ago

      @DrTheRich I said the vegetation and material handling were better, not the entire experience. UE has a poor PBR system; that's why everything looks like plastic. UE also only offers FXAA and TAA, both very crappy forms of anti-aliasing. Sure, Crysis 3 doesn't have RTGI, but the remaster does, and so does CryEngine 5.7. CryEngine got SVOGI running back when UE4 had failed to make it work at a playable framerate. Now Lumen also tanks frames, but they just tell you to shut up and use DLSS. UE is an arena shooter engine; it chokes on open worlds unless you make them as cartoony as Fortnite.

  • @Scorpwind
    @Scorpwind · 10 months ago

    I wish that all devs had this mentality and awareness as to what modern AA is doing to the image. This is one of the reasons why the FuckTAA subreddit exists. You should join it, by the way. There are some devs there as well. Don't let the name discourage you. There's normal discussion that takes place there.

  • @Theinvalidmusic
    @Theinvalidmusic · 10 months ago

    Bravo. I'm so pleased people are finally starting to speak up about this stuff. Playing some modern games can be really fatiguing because TAA makes things look *just* out-of-focus enough that I find myself reflexively squinting at them, which ain't great for eye health.

    • @dahahaka
      @dahahaka · 1 month ago

      That's really not going to affect your eye health whatsoever; it is uncomfortable though.

  • @brett20000000009
    @brett20000000009 · 10 months ago

    I like this, but saying higher framerates are only good for input latency is just flat-out wrong; I don't know how you came to that conclusion. Higher framerates are smoother (fewer stroboscopic artifacts) - that's a fact - and higher framerates have less persistence blur (motion blur). So there are really 3 reasons why you would want a higher framerate, not 1.

    • @bazhenovc754
      @bazhenovc754 · 10 months ago

      I'm not saying high framerate is only good for input latency; I'm just putting an emphasis on input latency. I wanted to keep the video small and focused.

    • @brett20000000009
      @brett20000000009 · 10 months ago

      @bazhenovc754 Fair enough.

    • @MHjort9
      @MHjort9 · 10 months ago

      He never said it was the only reason. He said it was the MOST important reason, which is true.

    • @brett20000000009
      @brett20000000009 · 10 months ago

      @MHjort9 Imagine being so full of yourself that you would openly state an opinion as fact. I think they are of equal importance, and input lag isn't always tied to framerate.

    • @SaHaRaSquad
      @SaHaRaSquad · 10 months ago

      @brett20000000009 Framerate is not the only factor, but it always influences input latency and always will. In some games it's less important, but if you don't prioritize latency in fast-paced games you're doing it wrong, period. You can call it an opinion as much as you like; it's still the opinion of anyone who has standards.

  • @normaalewoon6740
    @normaalewoon6740 10 месяцев назад

    For me, backlight strobing is the highest priority because I can't stand sample-and-hold smearing. 60 fps is not enough: I can see the flicker directly on my ViewSonic XG2431 monitor at a comfortable brightness. 85 fps is about right, but I prefer 100 fps. The game needs to run at this framerate without framedrops at any time, during any in-game event, and it needs an FPS limiter that can be set to any value, even decimals. A good in-game FPS limiter removes the lag and microstutter of v-sync; I cannot understand why this is so poorly known.

    If the game uses some kind of temporal reprojection, it needs a 200% frame buffer. The output needs to be 4K at minimum, even on a 1080p monitor with 1080p input. This makes the reprojection of previous frames more accurate in motion and removes most of the OLPF-style blur. The remaining blur needs to be managed by turning down the intensity of the anti-aliasing. I'd rather have a bit of shimmering than a bit of blur, and I also prefer unrest due to parallax disocclusion over background smearing. Unreal Engine 5.4 has history resurrection built into TSR, which should address this issue to a degree.

    Moving objects need to output correct motion vectors. Foliage wind sway already does this in most games. Scrolling textures and interactions need a previous-frame switch, and volumetric clouds need a transparent plane with a previous-frame switch to tell how the clouds are moving and at what height. Ray-marched objects need the right pixel depth offsets. If motion vectors really aren't an option, the material needs to be transparent and render after the motion blur pass; this removes temporal reprojection from it as a side effect. The 'has pixel animation' material flag in Unreal Engine 5.4 handles unpredicted motion a little better than before.

    I don't think it's an issue to render things over multiple frames, as long as it's necessary for the art style and free of disturbing artefacts. We should still learn how to get by without temporal reprojection, though, and even have an off option in games that push the best TAA to its limits. I like motion blur as an option, as long as it's only enabled during fast camera rotation, because it sucks otherwise.
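The "previous-frame switch" idea above boils down to keeping last frame's state around so the material can report how its content actually moved. A minimal sketch of that for a scrolling texture, with an illustrative `ScrollingMaterial` type that is not from any real engine:

```cpp
#include <cassert>

// Hypothetical scrolling material: the texture slides by `speed` UV units
// per second. To output a correct motion vector, it must remember the
// offset it used on the previous frame (the "previous frame switch").
struct ScrollingMaterial {
    float speed;       // UV units per second
    float prevOffset;  // offset used last frame
    float currOffset;  // offset used this frame

    // Advance the scroll and return this frame's motion vector
    // contribution, i.e. how far the texture content moved in UV space.
    float update(float dt) {
        prevOffset = currOffset;
        currOffset += speed * dt;
        return currOffset - prevOffset;
    }
};
```

Without the stored `prevOffset`, the reprojection pass would assume the surface is static and smear the scrolling detail.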

    • @bazhenovc754
      @bazhenovc754 10 months ago

      VSync microstuttering is an implementation issue; I don't think you need a CPU-side FPS limiter if the renderer is implemented correctly. I'm not against FPS limiters, I just don't see a lot of value in them.

      Using a 200% frame buffer defeats the purpose of TAA: that's basically supersampling, and you don't need TAA at that point. It could be a good option for high-end GPUs, but as of today 20% of the PC market is still using NVIDIA 10xx series GPUs, so I don't see that being a viable option anytime soon.

      To my knowledge, nobody implements TAA the way you described. There are several games that trade off AA quality to further reduce ghosting, and it's usually a fairly reasonable tradeoff that comes at the cost of image stability. I still think we should move away from TAA and temporal amortization: it's been around for 10+ years already and I don't see a lot of improvement there. The last breakthrough was AI TAA, and it seems that this is the best we can do. Even with AI there are still tons of issues on translucent surfaces.
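The supersampling disagreement here comes down to where the shading happens: with 100% input and a 200% history buffer, geometry and materials are still shaded at input resolution and only the resolve runs at output resolution, whereas true supersampling shades every output pixel. A back-of-the-envelope pixel count (pure arithmetic, no claims about real GPU timings):

```cpp
#include <cassert>

// Pixels shaded per frame at a given render resolution. Upscaling 1080p
// input into a 4K history buffer still shades ~2.07M pixels per frame;
// true 4K supersampling shades all ~8.29M output pixels, 4x as many.
long long shadedPixels(int width, int height) {
    return static_cast<long long>(width) * height;
}
```

This is why the two approaches have very different costs even though both end up with a 4K buffer.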

    • @normaalewoon6740
      @normaalewoon6740 10 months ago

      If an FPS limiter solves the problems of v-sync, why should we leave it out?

      I'm talking about upscaling with 100% input and 200% output resolution. This is not supersampling, merely upscaling to a resolution beyond native (somewhat like DLSS Quality + DLDSR, but more like 4x DSR + DLSS Performance). Blur is a natural result of resampling over and over again in motion, but a 200% frame buffer slows this blur down a lot, even with sub-native input resolutions. The upscaling itself gets more expensive, but not by too much: about 1.6 ms on my 3070 when upscaling to 4K. 4K is getting mainstream already, and I only need 85+ fps to use it with backlight strobing. Not because almost no one plays like this, but because I have never seen such good image quality and motion clarity before. It would be a shame to go back to 60 fps sample-and-hold with no TAA just because TAA has not been good in the past. Game developers just need to learn how to output the right motion vectors and make their games with motion clarity in mind, at least on higher-end devices.

      Of course, it's good to make games that don't rely on TAA as well. Fast-paced games don't need a lot of detail and can do well without it. For dense foliage, however, TAA is quite necessary for a pleasurable experience. I think it looks better by itself than no TAA, but it needs a 200% frame buffer to avoid headaches. A few traces of shimmering from keeping the TAA as weak as sufficient should not be an issue; you would get them with less detail and no TAA anyway.

      Also, I think TAA should be a personal preference for everyone. It's not something to forcibly enable or to leave out completely; especially not the latter, unless you really know what you are doing.

    • @bazhenovc754
      @bazhenovc754 10 months ago

      ​@@normaalewoon6740 How exactly does an FPS limiter solve vsync stutter? You need to render consistently at your target refresh rate. If you can't render at 90/60/whatever the native refresh rate is, then you need to query the graphics API for which lower refresh rates the monitor/TV supports and use one of those. Once the refresh rate is selected, try not to change it at runtime. Using an arbitrary value as a frame limiter is plain wrong: if you just sleep on that without correct synchronization with the GPU, then we're back to missing vblanks and inconsistent frame pacing. Alternatively, the user can set a lower refresh rate in the OS settings (on laptops or the Steam Deck, the OS usually won't let you enter arbitrary values); that gets reported to you as the native refresh rate, and all is good in theory. Or you can disable vsync and just sleep on the FPS timing. This doesn't really solve any of your problems, because the monitor cannot present frames faster than its native refresh rate, so you still get bad frame pacing plus inconsistent image tearing on top of that. Some users have a very strong misconception that disabling vsync makes the game more responsive (and a lot of games don't implement it correctly, so this misconception is not entirely unfounded), so it's a good idea to have an option to turn vsync off and leave it on by default. Most users don't change the defaults, so it's really important that the defaults are good.

      Regarding TAA: it's an interesting take, but again, I've not seen anyone else implement it the way you described, and I cannot judge its quality without seeing some demos first. I'm open to changing my mind and reevaluating my position when presented with evidence; right now, after 10 years of TAA, my stance is that it's not good. Also, 1.6 ms on a 3070 is an exorbitant cost if you want to target 85Hz+ refresh rates, and it's a very high-end GPU. I've mentioned already that 20% of the PC market is sitting on 10xx series; maybe in a few years you could reasonably target a 20xx min-spec GPU, but right now it is what it is.

      On a side note, I'm not saying that we should limit everything to 60Hz; I'm saying that we should not render at lower than 60Hz (unless the user specifically opts in). My personal preference is 120Hz; beyond that you start to see diminishing returns and it would generally be overkill. VR needs more than 120Hz, though, because otherwise you get motion sickness. You also need to always present at a consistent frame rate, again because of motion sickness, and you absolutely cannot have image tearing in VR, again because of motion sickness.
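The refresh-rate advice in this reply can be sketched as two small policies: pick the highest supported mode the renderer can actually sustain, and compute present deadlines as whole vblank periods so one long frame doesn't shift the pacing phase. A platform-free sketch under those assumptions (real code would enumerate modes through the swapchain/display API; these function names are made up for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Pick the highest supported refresh rate that the renderer can sustain,
// so vsync never misses a vblank. Assumes at least one supported mode;
// falls back to the lowest mode if even that is above the sustained rate.
double pickRefreshRate(std::vector<double> supportedHz, double sustainedFps) {
    std::sort(supportedHz.begin(), supportedHz.end());
    double best = supportedHz.front();
    for (double hz : supportedHz)
        if (hz <= sustainedFps) best = hz;
    return best;
}

// Next present deadline in milliseconds: always a whole number of vblank
// periods after the previous deadline, so a late frame skips missed
// vblanks instead of drifting the pacing phase.
double nextDeadline(double lastDeadlineMs, double periodMs, double nowMs) {
    double d = lastDeadlineMs + periodMs;
    while (d <= nowMs) d += periodMs;
    return d;
}
```

Sleeping toward an arbitrary user-chosen interval instead of these vblank-aligned deadlines is exactly the "missing vblanks and inconsistent frame pacing" case described above.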

    • @normaalewoon6740
      @normaalewoon6740 10 months ago

      ​@@bazhenovc754 When v-sync alone caps the framerate, it allows for CPU buffering. This takes some time and causes motion problems, resulting in lag and microstutter. An FPS limiter disables this buffering completely and makes the frame pacing smooth and direct.

      I should have mentioned that I mostly use custom refresh rates. The default settings are too limited, not only number-wise, but also because they lack the high vertical total that I need to minimize crosstalk with backlight strobing. I use whatever framerate I can run stably at, most of the time somewhere between 75-100 fps. However, some games only support FPS limiters of 60 and 120 fps.

      In Fortnite, you can use Epic TSR. It has a 200% frame buffer and it's not blurry in motion like all those bad TAA implementations, especially with 100% input resolution. You can also use 4x DSR (0% smoothness) + DLSS Performance to get 100% input with 200% output. Default TAA is much lighter than advanced upscaling and can use a 200% frame buffer as well, which may bring it to 2 ms on a 1060 when upscaling to 4K. It's not potato-cheap, but worth using if you ask me.

      I guess my playstyle is a bit more demanding than the 'sweet' 60 fps. I need a framerate above the visible flicker threshold, about 85 fps, without any framedrops. 60 Hz strobing is flicker-free for me at a much reduced brightness (about 30 nits), but I don't like it, so I only use it rarely, when I can't do better but still want to play.

      120 fps would be perfect for immersive types of games, while 240 fps is good for fast-paced games. That does not mean 1000 fps isn't any better (in fact it's a lot better), but it's probably not worth sacrificing features for. There are other techniques to create a 1000 fps experience too. With an eye-tracking device, it's possible to use eye-movement-compensated motion blur. Non-deformed frame buffer resampling based on eye motion is another possibility; it could replace strobing and frame generation in the future.

    • @bazhenovc754
      @bazhenovc754 10 months ago

      @@normaalewoon6740 With DX12/Vulkan you have explicit control over CPU buffering, and if you implement it correctly it's no longer an issue; you can absolutely avoid both microstutter and tearing at the same time.
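In DX12/Vulkan terms, "explicit control over CPU buffering" usually means capping frames in flight with per-frame fences: before recording frame N, the CPU waits on the fence of frame N minus the cap. A toy model of that cap, with no real graphics API involved (the `FramePacer` type is illustrative only):

```cpp
#include <cassert>

// Toy model of frames-in-flight limiting. cpuFrame counts frames the CPU
// has submitted; gpuFrame is the last frame the GPU finished. With
// maxInFlight = N, the CPU may run at most N frames ahead of the GPU;
// in a real renderer, canRecord() == false is where the CPU would block
// on a fence instead of buffering another frame.
struct FramePacer {
    int maxInFlight;
    long long cpuFrame = 0;
    long long gpuFrame = -1;

    bool canRecord() const {
        return cpuFrame - (gpuFrame + 1) < maxInFlight;
    }

    void recordFrame() { ++cpuFrame; }   // CPU submits one frame
    void gpuComplete() { ++gpuFrame; }   // GPU signals one fence
};
```

Keeping `maxInFlight` small bounds the queue of buffered frames, which is what removes the latency and microstutter attributed to "v-sync alone" earlier in the thread.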