Except that DLSS is "guessing" how stuff should appear and generating it based on the guess. It's all just AI giving you what it thinks you want, not what the true render would have looked like.
I mean, a blurry license plate where you can't make out the characters is still going to be sloppy guesswork by DLSS. It would have to know exactly how letters/numbers look in various colors, fonts, and against various backgrounds to possibly decipher the limited information.
I wonder if AI generated evidence would fly in the court room? I doubt it, as the AI is basically making shit up. It might be very plausible shit it is making up, but it still is only something the AI's training data assumed might be there, not an actual objective observation of any kind.
They are already making this. As an example, Remnant 2 devs openly admitted that they made the game with upscaling in mind and that it's not really meant to be played natively.
I mean, what's the issue? Can you actually try to have a higher level of thinking in that brain of yours? You do realize graphics cards have basically been using tricks this entire time, and then games become better looking and more intensive, something that was otherwise not possible.
Ray reconstruction doesn't actually help with performance much at all. So this tired joke you keep repeating over and over doesn't actually apply. Ray Reconstruction is for image quality, not framerate.
@@Irrelavant__ sounds like copium bro. we have seen time and time again even when hardware specs go up, AAA studios just dont optimize the game. they are like well now everyone has better hardware sp why optimize, at least at launch every game is a buggy unoptimized hell, most games do not need ridiculous high specs but it becomes a requirement because of zero care for optimization
DLSS is a double-edged sword. You see it as a cool technology and frankly, it is but we have a fairly high chance of future games relying heavily on DLSS to have good framerate.
Already happening. You can't run Remnant 2 at playable framerates on modern hardware without using upscaling tech, and the game doesn't even look that great. And Starfield is just a straight up embarrassment, but no shortage of people defending that optimization dumpster fire.
exactly what i was thinking.. but then again hopefully competition and the free market will force quality to triumph, also i was thinking this might actually be the way to "super realistic" graphic in games, you train the data on real life images and use it to replace the in game graphics and tada: real life graphics quality in game.
Starfield was my first real run-in with this issue. Running it without FSR or DLSS is a fool’s errand and I kind of hate it. I initially typed this next part elsewhere, but I’ll share it here as well in the hopes that someone who knows more than me can see it and correct me: >This is really cool stuff, I’ve been using the older versions of DLSS for awhile now, but is it a stopgap (for gaming)? Is traditional rendering dead? Will games soon become too ‘high fidelity’ for cards to keep up? Idk, it’ll be interesting to watch this space over the next 5-7 years, but I would choose traditional rendering every time if my GPU could handle it. >Specifically, DLSS-G (Frame Generation) just kind of seems like a cop-out to me. The cynic in me says it’s cheaper to make a card designed around Frame Gen than it is to make an actual ‘generational leap in power’ GPU, so that’s why Nvidia is doing this.
You're right about companies relying on DLSS to have good framerate, but I don't think that's necessarily a bad thing. I mean we'll get really good graphics with really good frame rate, so what's the problem?
@@wing0zerolol is this really the best counter? "You don't know what that means"? It's blatantly obvious that companies are going to crutch on this for FPS and push out unoptimized messes, where they lazily put processes on the CPU that take time to code properly for the GPU. I don't need to speculate that will happen when it already is happening. Vex has 2 videos going into explicit detail on the CPU issue.
The only issue with DLSS being mainstream is developers won't push to make their games optimized they'll just tell you to use DLSS. Games should be optimized and have great DLSS support.
Some of the unmentioned negatives for anyone curious: Input Latency and Motion Clarity. Framegen can't make the input latency better than it's native pre-gen latency with reflex enabled. All temporal techniques (TAA,TSR,FSR,DLSS,DLAA) introduce some amount of temporal 'blur' to an image compared to a non-temporal solution (MSAA, SMAA, SSAA), but the gap is slowly closing. Not quite ready for competitive shooters like CS and Valorant where clarity and latency is king, but it's getting better. RTXDI is another interesting tech if you're curious about "What's next". Basically Nvidia's Path Tracing tech. Some real "Oh, they're doing magic now" tech.
@@jaleshere Could work pretty well for turn based gameplay. However realtime gameplay dependent on reflexes still feels sluggish, despite the fluent graphics. I don't like where this is going - looks good in gameplay trailers, feels shitty at home when actually playing...
@@Charles_Bro-son Correct, it's a fake 60 FPS. So it will still feel like 20FPS input-latency wise. Useful for flight simulators and turn based stuff, horrible for first person shooters.
the tech to upscale things like this could also be used for ingame textures, that way no matter how close you get, the textures will be actively upscaled or downscaled depending on the relative camera position, similar to UE5's Nanite.
Although I like the idea of upscalers used as a way for older cards to get more performance, since nvidia limits it to new cards it's only a matter of time before they start using this "performance" as a selling point.
God Bless FSR 3. The fact that it works on all graphics cards, no matter the age or brand is insane. AMD truly playing the side of gamers since NVIDIA got so complacent and uncaring after making AI money. Addition: For the NVIDIA dickriders, this is not hate for Nvidia. This is competition loving. I don't hate Nvidia, I have used their graphics cards all my life and have only very recently switched to AMD. They are both very good but the only loss I suffered in the switch was Nvidia Broadcast. Other than that, both of them are functionally the same. However, being able to use FSR tech through the AMD Control Panel is incredible. You can have FSR in every single game even if it's not baked into the game. That's incredible. AMD is surging through accessibility for all gamers and I seriously hope because of this, Nvidia figures out they need to do the same otherwise they'll continue to lose good will. Their GPUs have already caught up, their CPUs have caught up to Intel, and their prices are normally cheaper across the board. They're doing super well and all the hate for Nvidia because they're simply not Nvidia or Intel is depressing. These are companies that don't care if you shill out for them, what they do care about is sales and numbers, and currently, AMD is sacrificing profit for goodwill to get ahead of the competition and it's working. Later on they'll maybe pivot, but right now, take advantage.
I remember learning about this in game dev school few years ago and thinking to myself: "Oh I can't wait for it to be used in games!" Good times coming!
@@ramonandrajo6348dlss isn't a gimmick which this video is on about....and if your on about ratyracing it also isn't a gimmick lmfaoo, it's hard to run yeah but it's a major game changer to graphics
@@ashleybarnes9343 It is a gimmick, as well as being a crutch for extremely Unoptimized resource utilization that is non-functional without it. If a product needs to be de-scaled and then checkerboard the screen up to “full res” to run in the “recommended” specs, your game sucks regardless to FX used.
When they say DLSS 3.5 for all cards it’s confusing. Only 40 series + cards will have frame generation thus have vastly different fps if that makes sense.
@@Wobbothe3rd Working is one thing, working as intended is another. They're trickling down features to older series just enough to let you crave the real deal (which is more feasibly achievable on 4070 and above), to prompt you to give in and upgrade. Genius marketing with no backbone, typical of late-stage capitalism. =)
I hope the rumours that Switch 2 will use this tech are true. For me it's the perfect use case, because you can't make hardware super powerful if you also want it to fit in a handheld. And this would compensate for that. Yes, it has some drawbacks, but it would negate most of the problems current Switch faces.
Doesnt make any sense at all. Nintendo uses no cutting edge hardware or software and the only hardware using frame gen and advanced hardware accelerated upscalers are Nvidia gpu's which will never be in a small form. Mobile = APU/integrated with no dedicated gpu. Also is expensive to produce wihich Nintendo never deals with.
@@Ay-xq7mj iirc the rumored chip Nintendo will be using isn't exactly the latest or cutting edge of Nvidia in terms of hardware. it will be on ampere which at this point is a couple years old and will be replaced by Nvidias chiplet design in the 50 series ~2025. So Nintendo is quite literally gonna release the Switch 2 on hardware that Nvidia has matured at this point. Switch 2 WILL have DLSS and Frame gen available to it.
I have been following "2 Minute Papers" (I just can't remember this doctor's actual name) for years now, he's a genius. This "illusion" of creating more frames by having the computer predict where and how pixels are going to move in the next frame in order to place a blend in between has been used for years now in video production with the purpose of creating fake super slow motion videos, as in GoPro Hero promo videos, which feature scenes that were just not possible to film with their cameras back in the day. The name of this feature was known by the commercial name "Twixtor", and it was very popular back then. However, it took hours to render just a couple of minutes of video with this effect, what is actually new here is that it's working at real time now, which is absolutely mind blowing. "What a time to be alive!"
I couldn’t finish the video. There are millions of non-native English speakers out there but I have never seen someone use so many commas while speaking. He has the vocabulary and the pronunciation but wtf
"This makes the barrier for PC gaming that much lower". Yeah, you only need a 40 series GPU for frame generation, they're practically giving it away! /s
The problem is that barrier of entry won't last long. For now this technology makes new games accessible to older hardware, but games going forward are going to be developed with this technology in mind, so the requirements will go up accordingly.
Man could you imagine if this kinda tech dropped on the next nintendo handheld or really any other portable system. The ability to boost performance using optical flow techniques could be the key to keep those types of devices small and lightweight while competing with home consoles and PC.
Some potential leaks about the next Nintendo hardware (Switch 2, or Switch Next-Generation, or wtv it's gonna be called) mentioned the fact it is probably gonna use DLSS/FSR3 along with better hardware to make games such as Final Fantasy 7 Remake or the Matrix Demo be able to run on the handheld console as if it was run on a like PS4 pro
without RT and DLSS, we would all be playing 4K right now with a 300 dollar card if they focus on rasterisation instead of waisting 50% of the pcb on creating an artificial battletech with AMD and intel, to force customers into buying their product with totally non mature technology. gives 4K to everyone first, and 8K to rich people, then work on reflection. Who care to have 90fps written on fps indicator when in reality there is only 15fps running with all the latency/ghosting/noise/color shifting, blurryness/frame distortion/cutting problem it involves. Give us 60 fps 4K with decent price, and improve shadows, water, texturesn then create real RT. Because right now it's kind of shame in 2023 to see a 2000 dollars card unable to stable 60 fps @1080p with max settings on some 2020 games such as cyberpunk or flight sim without using artificial frames and resolution through DLSS.... If to continue with that tendency, you rather watch a screenshot slideshow rather than playing videogames.... It's totally stupid because RT impact on games is more important than switching from 1080p to 4K. It's as demanding as playing in 5K. And we could all be gaming at 4K without these tech, and honesty I don't see anyone in the world trading 4K for RT if they could see the diff between 1080p and 4K, they would not give a damn about RT.
thats a pretty good point, you should really up your resolution to at least 4k before you worry about RT, but at 4k there really isn't anything that can run RT.
The plot twist is that the technology for 3.5 is advertised strongly on the comparison videos with frame generation on while also pushing strongly that DLSS 3.5 is available without frame generation for all RTX cards. A tad of manipulation there. The real difference will be 5-15% at best depending on the game. At least it will look better with more or less the same performance.
I think a good way of looking at it is the comparison between brute forcing computation, vs becoming increasingly better at predictive visual generation. Imagine if we used our brain as an example, vs the brain of a whale. It can weigh over 10 kgs, vastly outscaling ours in terms of sheer size and also in neuron connections. But the reason our brain is different, is due to how different parts interact, how some neural links are formed, how our process of learning over time to rewire our brains to constantly become better at things we set out to improve. That is what truly sets us apart and allows us to go from the intellectual level of a wallnut as infants, to gigabrain 5heads (on occasion) as adults. Neural deep learning AI does the same for computational graphics, it entirely reworks where the effort is put, it forms 10 99% positive connections before a brute force method has even calculated a single 100% step. And with time, AI will only get closer and closer to 100% for each calculation, to the point where we can't actively tell apart the differences - which is pretty much where we're at right now. So from this point, the scale is only going to further extend in favor of AI accelerated predictive generation, rather than brute force calculation. Just like how we read words in a book, we don't need to read every letter one-by-one to figure out what the entire word means. We can use predictive deduction to skim the possibilities so fast that we are able to make out even complex words practically in an instant, which allows us to read books in real time. It sounds strange, but when you think about how many possibilities we are filtering in milliseconds to arrive at the right conclusions in real time, it's crazy. AI is literally that. And yes once in a while (just like us) it gets a prediction wrong, and there can be brute-force reliant methods to fact check these more high-risk generative predictions to ensure that false positives get reworked, and this process can be done in a millisecond too because the actual hardware isn't working overtime on brute forcing every frame on its own. Frame Gen may seem like a gimmick now, but in cohesion with DLSS and ray reconstruction, I think within 2 years a most this will be the standard that takes over from brute force hardware calculation when we look at general performance levels.
You raise some valid points. It's true, the tech we've been given here is next-gen. The fact that Nvidia did it first is a logical consequence of their many advantages over AMD. They have the brand recognition, the market share, and, most importantly, the resources. This is the turbo-capitalist reality (all else be damned), and Nvidia hasn't shied away from banking big on the recent AI hype either. AMD has only ever retaliated retroactively; playing catch-up while setting competitive pricing has always been their signature (case in point, they failed to rival Nvidia's flagship; RX-7900XTX is as far as they could go). FSR is also an attempt to mimic DLSS, and AMD lagged behind nearly 2 years with it. FSR 3 got released yesterday and it is hardly on par with DLSS 2, let alone 3 (to say nothing of 3.5). In short, a disaster, but a promising one if we're to have some faith in their Fine-Wine technology. I appreciate the idea of DLSS itself in a limited set of scenarios, but I will not give credits to Nvidia for implementing it first. This is simply the way the GPU market conditions have aligned for them to end up as the DLSS pioneer. Historical circumstances and the perks of monopoly. These are just farcical levels of RNG at play. If AMD released FSR before DLSS (and made it just as good), things would've played out differently. But that's one too many ifs and what's done is done. Right now I'm more concerned about a future shortage of GPU manufacturers for the masses, because recent trends show that AI and DataCenter are far more profitable moving forward. I fear the day AMD stops developing GPUs because it will simply no longer be profitable, which would give Nvidia complete chokehold over the GPU market (ceteris paribus). And I also dread the idea of using DLSS as a way to cut corners on proper game resource optimization because Nvidia can get away with an upscaling gimmick that tricks our monkey brains. (You can find arguments in favor of it, but I remain forever convinced that DLSS could never hold a candle to true frames rendered at native res with real computing power.) This creates a precedence that could soon become the new standard for gamedevs. Just as Apple did with the notch; it was just a feature for the iPhone, nothing groundbreaking in particular, but it became the new norm because of Apple's power to set trends and the masses' willingness (or obliviousness) to take it for granted. And now here we are: most phones have notches. I hope the same rhetoric won't apply to frame rendering in the future. So, as it stands, this endless arms race between red and green only affects the average PC building enthusiast due to practices employed by both sides, with AMD over-promising and under-delivering, and Nvidia adopting bandaid, Apple iPhone tier tactics and planned obsolescence (crippling the performance of older cards with newer drivers to boost sales of latest gen cards). I'm a GPU centrist, I've owned both Nvidia and AMD GPUs on and off, but I think neither team deserves to be spared from due criticism during this hell of a period for R&D. My heart and wallet say AMD, but my brain and foresight say Nvidia. And that's a really disheartening prospect. I am trying to root for the underdog, but right now it feels like an inadequate time for it. Advancements in AI computing are rattling the world as a whole (let alone the gaming world), so I find it hard to imagine AMD even catching up with Nvidia in the AI division of their R&D for upcoming GPUs. At the same time, I cannot endorse Nvidia's greedy marketing practices. Part of me wishes that the AI hype would just wane already, but that's no longer an option. We've already unlocked Pandora's box. So, it makes complete sense that whichever GPU manufacturer outshines the other in the AI segment is sure to have a significant headstart going into this territory. And right now that manufacturer is undoubtedly Nvidia. I pray for AMD to get their shit together, and fast. Alternatively, we could also hold on for a divine intervention from a third competitor under the name of Intel, but I have no illusions. Their focus seems to be on iGPUs now and for the foreseeable future.
@@GryffinTV I pretty much agree on most if not all the points you raise, and have thought similarly about AMD and their path. There has however been recent aspects of their business practices that have helped shed light on some potentially conflicting elements - for example the idea of AMD being the runner-up and therefore not being the turbo capitalist entity to push ahead compared to Nvidia. From observing AMD's behavior over the past 3-4 years, it has become evident that AMD in fact actively choses to not bear the torch, even when they have the opportunity to do so. They could take the lead, but don't. In fact, it has led me to believe AMD (at least with their current plans of operation and business) are not actively interested in stepping out of Nvidia's shadow. I think for the time being, they're more comfortable staying as the perceived underdog, the competitor that plays catchup, the one fighting "against the odds" so to speak. The reason I believe this, is because AMD has shown a very clear bias towards mimicry, even when they could easily take the lead in sales or general market share, and I think one of the key reasons why they're sticking to the shadow, is because the risk of being the forerunner is also that you end up tanking the hardest hits if things do not go to plan. AMD has consistently struggled with internal planning, which does make sense because they're a smaller scale than Nvidia and they are more limited in resources, thus limited in options and leeway. If things go wrong for them, they fall harder, with fewer safety nets. So there is lower inherent risk in actively choosing to play the role that catches up and mirrors what Nvidia does, because it lets Nvidia pave the path and pay for that pavement upfront, which in turn lets AMD coast more easily and even if things go wrong, they will be safe in the fact that Nvidia by nature has gone down the same path so at worst they're both wrong, and in a duopoly market with no real alternatives, that is a very safe place to be. But if AMD tried to lead the path and Nvidia decided not to follow, in that case if AMD's path goes wrong then they're fucked. Now Intel is starting to throw cogs in the wheel, but it's a slow process and it gives time for AMD to figure out how to position themselves in the next 10 years - and I think that may also be why we're currently seeing small shifts in market segments and what type of GPUs they focus on, as well as how they are redistributing their balance between consumer and enterprise markets. More than likely AMD is actively choosing to stay in Nvidia's shadow for now, while they test different waters to see potentially viable future paths to branch off to post 2030, and see what type of groundwork would have to be laid out to enable those paths. And while doing so, staying in Nvidia's shadow is simply more consistent and less risk. Two things a company in process of assessing change loves to be in.
Unfortunately we live in a reality where this will be a double edged sword for consumers. Devs will begin using this tech as a crutch for optimizing their games and we'll be right at square one. The games will be so poorly done that enabling DLSS will feel like just playing the game normally and then god forbid you don't use the feature or else you'll be running the game as a slide show. Now if the game studio has some integrity and puts in 100% effort into polishing their game regardless of DLSS then we'll get god tier games to play but with how big corporate greed is in terms of cutting costs, I have my doubts unfortunately.
Keep in mind, though there is support for DLSS 3.5 on Nvidia cards since 2018 (Rtx2080ti), they will not have the full capabilities unless you have a 40 series.
to be clear DLSS 2 (SUPER SAMPLING/UPSCALER) is used on DLSS version 3.5 (nvngx_dlss.dll) which is available on all RTX cards DLSS FG (FRAME GENERATION/DLSS 3) is also used DLSS version 3.5 (nvngx_dlssg.dll) only available on RTX 40 cards DLSS RR (RAY RECONSTRUCTION) which is DLSS 3.5 (not sure if it will have a separate file or not) that's available on all RTX cards yep only make you more confused so basically RTX 40 Series: supports all things offered by DLSS RTX 20/30 Series: the same except Frame Generation
Standard ray tracing isn't remotely worth the performance hit, and path tracing looks incredible but is prohibitively expensive. Outside of high end consumers who can afford 4090s, we're still waiting on the hardware for proper ray tracing in video games.
The sad reality is that it doesn't change much for the consumer because this just means most devs will get lazier. A lot of games optimize towards 60 fps and if they can reach that point with AI they will have increasingly high quality graphics but with optimization so bad it requires high-end hardware anyways to run. Ideally, practices don't degrade over time but I am not particularly optimistic seeing what games like Remnant or Starfield did.
It adds latency if you use frame generation. But normal DLSS with reflex is pretty good. Some weird smoothing occurs with movement but it's getting better just not quite there.
Since DLSS works so well for nvidea the newer 4000 series cards already have worse component on them because they expect you to use DLSS. Im all for DLSS but nvidea is just to profit hungry it is sad.
this guy's channel is amazing if you are pasionate about these kinda of things. I've been following him for a while and his videos are top notch, great explanations EDIT: they just dropped update 2.0 for Cyberpunk, 48GB so it's a great time to start a fresh new save (as they recommend)
The only bad thing about this is that devs have been relying on this for games to run instead of optimizing them properly. Either that or graphics card manufacturers actually tell them not to optimize for older cards so we have to make the upgrade. My 1070 looked pretty decent when it came out running most games I wanted to play at max settings 1080p and keeping stable 60fps if not higher (I have a 170hz refresh monitor). Newer games don't look like they're that much better in terms of graphics but my 1070 just struggles to get 30fps on low setting at 1080p and that shouldn't be happening. That's just bad optimization from the devs that are using DLSS or Fidelity FX as a crutch, as games have not made the graphical fidelity leap these hardware requierements are suggesting.
The issue is, if it's using two frames to generate an in-between frame, there'll be input lag because it has to wait for the third frame to be generated naturally before it can create the 2nd frame to display. But if it's using AI to "predict" the next frame based on the already displayed frames, then there will be errors, especially when things on the screen are changing a lot. The AI can't predict that you'll decide to turn around or fire a big gun or whatever at that moment. Hell, it almost certainly won't be able to dig into the game mechanics to accurately predict even what the enemies and NPCs will be doing next.
Excited for ray reconstruction. Frame generation, on the other hand, adds input latency and is only available on 4000 series which should be powerful enough to run everything at 80+ fps without it anyway
I just ran the benchmark for CP2077 2.0 patch with high settings and path tracing, and without frame gen, it would be ~50 fps. You're right for most games though.
@@wholetyouinhere when games start being made with DLSS in mind and don't optimize because "lol we have DLSS" then yeah it basically becomes "adding input latency"
alot of people doubt DLSS and think its bullshit but its as real as it gets u basicly sacrifice a small bit of ur input latency and it introduces a little bit of blur but the technique improves so insanely fast that soon its gonna be almost impossible to notice these flaws i remember when DLSS 1 came out and u could rly feel it yet the improvement to performance was already astonishing well now u basicly only got the upside cause they minimized the downsides
There’s DLSS and DLSS-G. DLSS-G is Frame Gen. Right now, only the 4000-series RTX cards can make use of it. DLSS lets you run your games at a lower resolution scale and basically maintain the same visual fidelity. DLSS-G is the one that actually generates the frames. I am still skeptical of DLSS-G bc it feels like a cop out and a reason for Nvidia not to keep improving the ‘raw horsepower’ of GPUs.
at 13:17 he is so fucking wrong I looked it up and got this "Globally speaking, there are over 1.8 billion PC players, about 151 million PlayStation players, over 63 million Xbox gamers, and a little over 87 million Nintendo players.".
I spent little over $600 dollars to build my PC this year. The barrier for entry really isn’t that high. A big part of why people don’t get into it is because they perceive it as super expensive and also because they think PCs are complicated to build and don’t want to learn. Consoles are popular because people don’t wanna do anything themselves and because they think they have to drop 2 grand on a PC when they really don’t
its seems expensive because the high end parts are more expensive than ever, but its actually never been cheaper to buy a decent rig. low to mid range parts are more powerful than ever.
Frame generation is a nice to have thing, but its seems to only be worth using if you're getting less than 60 fps..it does increase the visual smoothness etc. but from my understanding it has no effect on input latency so the input would still feel like 25 fps if that's what you're able to run without frame gen. It certainly depends on the type of game you're playing and the input method used.. I can't imagine it would feel very nice to use with keyboard and mouse as that is more sensitive to input delay vs controller. Also, at this time frame generation requires RTX 4000 series cards meaning at least an RTX 4060. I think it will be way more interesting if AMD FSR 3.0 allows frame generation on all generation of cards as it seems more of a low end GPU use case thing.
the thing is that you can tweak the resolution and setting so that you start generating from 40-50 fps, and get 80-100. That's way better than playing on low settings in 60 or high in 30
@@Qrzychu92 this proves my point even better, who is more likely to turn down resolution? High end GPU owners or low end? And even DLSS upscaling from anything below 1080p looks pretty bad..it does retain more quality than FSR but ideally both tech should be used at least 1440p resolution. The higher the base resolution upscaling from the closer FSR matches DLSS in image quality, I think most people put too much stock in DLSS quality vs FSR because they both suck at low resolution lol.
@@Coffeeenjoyer31 "And even DLSS upscaling from anything below 1080p looks pretty bad" TODAY. Don't forget they are working on it. I can easily imagine that in next year or two, upscaling from 720p on high will look better than native 1440p. For today, you are right, just don't forget that there is tomorrow. If you told somebody during the RTX20 series premiere that there will be AI powered frame generation RECALCULATING THE RAY TRACING for the new frames, nobody would believe you, or say that it's going to be viable at all. But here we are now :)
@@Qrzychu92 that's a fair point, we will have to see how it progresses. However it will be a challenge surely because lower resolution is just less data to work with. I certainly wouldn't recommend a product to anyone based on future improvements.
Dlss 3.5 is a de-noiser that’s supported on all rtx cards for better looking Path Tracing(not ray tracing and doesn’t affect performance). Dlss 3 is frame generation only on 40xx cards. Makes sense right.
I hate DLSS for one reason. Nvidia is now selling you software, instead of hardware. You are not paying for a product, you are paying for a service. How much do you wanna bet, that this ends up screwing the customer? You really think they developed this for us to have smooth framerate? They developed this so they could charge us for the same performance, while not having to pay for the materials themselves. I just know this shit is gonna suck
The only problem with DLSS is that Nvidia make so you need to buy/re-buy their hardware to use these technologies, in comparison AMD is launching FSR 3 for not only their GPU's(older and newer ones) but for the whole GPU market.
Sadly nVidia seems to only put a little more Tensor cores into consumer cards than needed. The reason why a little more than needed is to get in front of AMD in performance. DLSS DLAA and etc. with their deep learning line of consumer technologies uses tensor cores not CUDA. So nVidia can and probably will do what you are saying and make more tensor compute intensive models that require more tensor cores to use in future cards to get consumers who want to use that feature to get a new card.
Exactly, im buying AMD card next time unless scammer nvidia supports my 3060 as well for dlss 3.0, not believing ''it wouldn't work well enough'' BS! If they can not support even last generation for new technologies ridiculously, there is no point buying 50 series etc as it might happen again...
@@ggoddkkiller1342it "works" yes but running without it runs better which is the sad thing. The value i see in the 50 series will be in the 5090 to upgrade from the 980 in 2025. The reason why the 50 series of nVidia instead of the 8000 series of AMD or the 800 series of Intel is 3D modeling program support. CUDA is supported by games too so that is why i use the same card for both. Gets in the way when the full GPU is needed for GPU compute in some applications but having a integrated GPU fixes that which i learned from not having one this time.
@@yumri4 Nah still not buying it, 4050 can handle dlss 3 but 3090 ti can not with many times more tensor etc cores?! Even my 3060 outperforms 4050 in every way possible...
@@ggoddkkiller1342 with a quick google i found that you are safe, the DLSS will support the GPUs from the 20 generations, but in my case, my old 1050 will only have the FSR 3
The problem is that this threatens the top GPU market because most people won't need to buy them anymore in order to play a game on Ultra with high FPS. I bet Nvidia already has amazing DLSS technology to make all this even better, but they won't release it because of their top GPU sales.
Nvidia are already locking their tech behind new GPU releases. They are not enabling these AI solutions on previous generation cards, so you don't have to worry about poor Nvidia losing sales. And developers have already started skipping optimization in favor of using upscaling tech, so you will need high end GPUs in order to achieve playable frame rates in the latest AAA releases.
Hardly. What we will instead see, is dev studios taking advantage of the higher ceiling in order to simply create better visuals for "high and ultra" settings. Like Cyberpunk's "Overdrive" mode for example which you literally need a 4080 or 4090 to run at playable rates even with dlss on 4k. And even then, "Overdrive" is still a compromise in many regards compared to full global illumination and caustics. It's not because we are in lack of dials to turn up. We have only turned them up by 5% at the current point, and 90% of dev work goes into figuring out how to make graphic compromises so the dials can be turned down without the visuals going to shit. High end gpu market will always have something catered to it. Trust me, that's gonna be the least of our problems (3D designer by profession, if game devs could ad 5% of what this industry utilizes, you wouldn't believe your eyes).
The cynic in me says Nvidia love this - they will design their future cards around Frame Gen. - in games where Frame Gen is not used, the FPS increase over the older cards will be less impressive. - where Frame Gen is used [needed?], the older cards won’t be able to keep up whatsoever. Less savvy users will just assume their “old GPU” is trash and will buy something in the latest generation’s line-up. Frame Gen will give the illusion of increased power/performance.
You really think Nvidia is sabotaging their market like this? No, they're gonna lock this tech to the new GPUs. And when the marketing drives off, they're figure out a different technology to sell. They're not that dumb. Besides, there's still people who want REAL frames like me.
Just marketing smoke... Games already have so much graphic options... It's embarrassing. And what do we have?? Another graphic option. The PC world is a mess. Meanwhile the hardware is failing left and right. Particularly GPU's! They're getting so hot now, pushing the 4K resolution at acceptable framerates, that is blowing the chips. Or blowing the connections in the chips... And actually, it's not just the chips it's also other components like the power connector debacle on Nvidia 4090's... So now GPU's instead of living 10-15 years... They barely reach the 5 year mark. Anddddd of course, more E-waste for landfills. Let's hope nobody sets their house or belongings on fire because that's also on the table, last but not least!!!! Have you seen the size of GPU's too?? They're literally BRICKS now, it's a JOKE! The coolers are so massive that you need to have some sort of support inside the case so the whole thing doesn't fall off and pulls motherboard components along with it. And this "4K race"...... There's something going on in the manufacturing process or the quality of materials that is just making garbage GPU's... So the whole industry... It's a snapshot of the actual GPU's lolol It's so full of bs that is crushing itself!!! This thing doesn't have a leadership with a clear objective and it's being run by lunatics. We don't need this bs.... What we need is BETTER games. There's barely any game that justifies having this kind of hardware lolol
@@blushingralseiuwu2222 What is the problem? He is not a native english speaker. Didn't you understand anything he said? The guy is incredible at Light transport research and always shows the newest papers in tech, you get that anywhere else on this platform.
Its true, I downloaded the DLSS 3.5 mod for starfield, and that shit literally doubled my FPS. I went from 50-60 frames in cities to like 110 frames on my 4080. That shit is fucking god tier
@@MGrey-qb5xz not sure what youre talking about by that because with the DLSS 3.5 mod, the textures and objects are sharper than the vanilla. It does the opposite of blurring.
This does less showing how impressive DLSS 3.5 is and more raising questions about why a 40 series card still only gets 20fps in Cyberpunk, the same fucking performance my 20 series card was getting.
DLSS 3 is going to be available only to the latest most expensive nvidia cards. AMDs FSR3 will be available to ANY video card using DX11/12, with some functionality limitations depending on the card capability, and it will be doing the exact same shit and more.
Problem is this is a way for Nvidia to sell software instead of hardware. Which will mean you will always have to buy the next generation of graphic cards every time to get access to this software, even though it could probably work on any GPU.
@webtiger1974 matter of time lol doesnt matter if its nvidia amd or intel if this shit gonna be in all games every company will implement something like this 100%
Wrong. This technology does not work on any hardware. Hence the tensor cores. AMD does not use em, slaps cheap extra ram on the card and people lap it up.
@@cirescythe AMD is normally accepted as the best general hardware company (so stuff that's for like work and not gaming). Their stuff doesn't have tensor cores because they don't need it, it's not the audience they cater to.
Heavily increases input lag. DLSS (Deep Learning Super Sampling) heavily increases input lag due to the waiting time of holding back both frames to generate the AI frame.
No it doesn't. DLSS trades various performance metrics - sharpness, clarity, motion stability, resolution - for other metrics. The grift works because we typically don't measure the metrics that are sacrificed. None of this is to say that DLSS is inherently bad, but it's definitely not "free" performance, and it's definitely not legitimate optimization.@@RlVlN
Not sure if it’s the same as this but I’ve already downloaded a few apps that use AI to un-blur some not so good photos I’ve taken on a trip. Couldn’t beleive how well it works!
Problem we started seeing nowadays after DLSS became mainstream is that devs design games to only reach playable framerates WITH dlss or FSR enabled, which is very counterproductive. It will actually end up raising the barrier to entry instead!
All part of the plan. When devs were using traditional native rasterization, games scaled very well with older GPUs. This was obviously cutting into Nvidia's sales, so they needed to create a business model that forced customers to upgrade - and Nvidia needed to do this without relying on development of more expensive (for them) hardware with legitimate generational performance uplifts. The solution has been AI software, which has become Nvidia's entire focus because it is way more profitable than delivering better hardware. Now devs are programming with these AI software solutions in mind, and Nvidia is locking that software down to new overpriced GPUs that feature only minimal performance improvements (keeping Nvidia's own costs down). The end result is going to be marginally better looking games at the cost of optimization and an upgrade cadence that will price out the vast majority of consumers. Eventually, people will be forced onto cloud gaming. They will own nothing and be happy.
anyone below 40 series cards dont get frame generation.... thats what ur seeing 15 - 80 fps jumps. Again if you have a 20series or 30series cards upscalling is only really usefull on a 1440p monitor as upscalling will just make ur game look like mush at 1080p. just an fyi i think asmong got this all wrong.
I love dlss 3, it's working quite well for some games, most of them single player but I even use it for darktide 40k and enjoy 122fps on average and I can play without tearing. Game looks amazing and the technology was the deal breaker to go for an overpriced 4070ti. I had a 6800xt and problems with VR like crazy. The Nvidia works well with my quest 2 and link cable (much better than the amd). What Nvidia is lacking is their quality and GUI and function of their drivers. AMDs technology is good, but amd lacks some niche things (Like VR or at least the few games I am playing weren't working well😂) So far so good. Hope everybody's card is working well, regardless of AMD or Nvidia or Intel. The best should be that we all can enjoy those marvelous technologys
Personally i dont like DLsS 3 in games with ton of movement because it produces ghosting which means several semi-translucent backends of object like car following the actual object or car when in motion
AMD cards have never worked right with VR headsets like the Quest2 that utilize transcoding. They just have bad video encoders and worse driver support for them.
I ended up getting a 3070FE for Darktide and it still ran like total trash ran better months later but no by much. Its a gimmick same with ray tracying.
@@ramonandrajo6348 AI based upscaling is a gimmick? That why Intel and AMD are rushing their own version out the door? It's the future of computer graphics whether you're onboard or not.
100% Nvidia makes this a monthly subscription. Why would they release something that would make their GPU sales go down dramatically over a 10 year period?
GPU competition used to be about raw performance. Look what it has become right now. The only thing they need to do is just shut their mouth and optimize the game carefully before shipping it. And we simply don't need some "tape" to make it just playable.
DLSS is basically the version of how video encoding works but video game style. Meaning the hardware doesn't need to render everything in between frames but the software can just fill it in. Like in how videos, the entire scene does not need to be rendered in but only the moving parts.
I mean kinda.. I am not sure I would use video encoding as a comparison to DLSS. I get what you were trying to say but it's not terribly accurate to compare the two at all. Especially since DLSS is ultimately not software based. It is using the AI cores on the GPU to do the compute. It is still very much a hardware based process and still very much rendering at the end of the day. It is just taking an alternate rendering path. Rendering is just literally the act of creating an image.
@@JathraDH It's not a hardware thing, its a software thing that is hardware accelerated. And video encoding is also commonly hardware accelerated. Ever heard of NVENC?
@@bethanybellwartsEverything is ultimately a software thing that is hardware accelerated which is why we call it "hardware" and not "software". When we call something "software" it means the CPU alone is doing the processing. If we could do AI efficiently on a CPU alone we wouldn't need GPU's to run AI in real time now would we? Encoding however is the act of taking 100% complete data and figuring out which parts of it can be removed while keeping the end quality at an acceptable level. It is a subtractive process (outside lossless formats). The replay of encoded video really takes no work because the work is done up front on the 100% complete data set. AI is starting with an incomplete data set and trying to create more data out of nothing. It is an additive process and the work is done in real time which is why it is so intensive to do. AI and encoding are actually diametrically opposed opposites of each other, which is why I feel comparing them isn't really proper. They are fundamentally not remotely the same type of process at all.
Similar but with some key differences: compression means you can analyse the next frame to get the motion, DLSS predictive frames is much more complicated because of the real time data in input and the fact that the AI pipeline generates the vector based on the knowledge it has of the probable evolutions. 3.5 is just even more over the top as the training data takes into account lightning specifics, as if it understood the image and the physics behind. Some artifacts can be generated because in a game you don't know what pixels will move next, so the more FPS at the start, the better the result.
There is ONE BIG ISSUE with that all technological progress. And that issue is greed. Games are not made for us to have fun. They are made for us to upgrade our PC to have fun. Devs dont give a **** abou game optimalisation. They even admit it (starfield). Now they are even more agressive tools like dlss 3. Next year games will be bearly playable (~30-40fps) only with dlss 3, but will offer 6-8 years old graphic. But fear not we will get new 5000 series with dlss 4 wich allow us to play again in 60+ fps. And then repeat. All that new technology is great and we as gamer hope games will be made in first place to work good without it, and then that technology will be added to further enhance our in game expiriance. Is it? Or maybe I am right and only good use of for us is that, devs use it to make games somewhat playble years after release.
If you're confused why the framerate jump is so massive, it's because they're getting those numbers (15-85fps or whatever) by using frame generation AND by using upscaling. So upscaling renders at a lower resolution, upscales it to make it look pretty, which probably gets your framerate up to liek 30-40, then frame generation doubles that
i have a 3070 and there was a lot of hype around DLSS a few years ago so i expected greatness but honestly it doesn't do shit for me except lower visual fidelity and maybe MAYBE give 1-3 fps if i'm lucky. so color me skeptical. maybe 3.5 is good but, doubt
@phoenixfire8226 no other definitely xcan, depends on the title, settings, resolution etc but see testing where that card is clearly limited by that cpu under certain conditions
There is an argument for not using frame generation. Namely input latency. But the rest of the tech can feel a lot like a free win. I cannot see any difference in picture quality turning on DLSS 'performance' in any game I've tried that has it. I honesty think it makes Cyberpunk look better. (I suspect it's better at AA than the traditional AA as implemented on that game.) So that extra framerate essentially has no trade off. -- Note: It's real framerate when just upscaling with no frame generation.
Next time I play chess/Civilization - I'll be sure to turn DLSS on... ...for anything else - nah, I'm good. If the game doesn't perform without this fakeness - I rather refund.
@@tripnils7535 Casuals might not... ...but you really think it is Casuals who are forking over nearly $2k for a gfx card? It's super-fans that are over-extending to get on the 4k hype. 10MS is enough to get you killed every time in competitive gaming. So - this isn't for gamers... it's basically for Console players. A way to keep consoles relevant. :D Anyways - I'm staying out of it. I'm happy to switch to Intel come next gen. Anything to get away from NVidia at this point.
Frame Generation must simultaneously generate smaller frame and than be affected by DLSS super resolution to upscale it. So if we look at the picture 2:59 the DLSS FRAME GENERATION must be as small as TRADITIONAL RENDER and start generated simultaneously while we upscaling TRADITIONAL RENDER fame, and then FRAME GENERATION have to be upscaled with DLSS SUPER RESOLUTION while next TRADITIONAL frame rendering. I believe it will make the process even faster than now.
I think it's a little bit awkward to group DLSS 2, 3, and 3.5 under the "DLSS" name when they all do COMPLETELY different things and serve very different purposes. Even so, still I'm excited to where this is heading
Does DLSS 2 use deep learning super sampling? If so, what does 3 do? What about 3.5? Since they're completely different. "I think it's a little bit awkward to group this AI with this other AI, they are COMPLETELY different applications of AI."
@@rewardilicious DLSS 2 (nvngx_dlss.dll) = Super Sampling DLSS 3 (nvngx_dlssg.dll) = Frame Generation DLSS 3.5 (unknown) = Ray Reconstruction all of them (their .dll files) are under version 3.5
No to dumb it down. Rendering is created from raw power of gpu&cpu . Generation is an AI program that uses your frame rate an makes extra frames in-between to make a better frame rate. Look at rendering vs generation frame by frame and you'll at least see the difference if you don't understand it from my explanation.
Nah, rendering is mathematically figuring out what it should be. Generating is making up something close enough to reality based on the information you have.
Lets say you had to draw a regular polygon. Generating will be drawing a polygon by hand and making it look close enough, rendering will by doing the same but calculating the angles before hand and then drawing with a protractor to make sure it follows the calculated angle.
In the future developers will be able to make a game out of a literal PowerPoint presentation and technology like this will turn it into a living vibrant game.
The intention: Push games PAST their limit to bring games not possible through native rendering. The reality: Devs spend less time polishing their games, using DLSS to hit 60fps
I mean the numbers are good but if the latency counteracts the extra frames you produce then things like FPSs are a no go. Single players etc it can definitely be useful though
I agree. I've been playing around with it all morning, and it actually encourages me to use frame generation now. There's so much more clarity and it gives me a reason to want to use path tracing combined with fg now.
hear me out - the graphics cards are powerful enough to run the game, they're just improving the ray tracing constantly by trying not to require so much of resources from the card. That being said, the game runs 80fps with no dlss by default but marketers nerfed the non-dlss option so you would be amazed how smooth the game runs when you turn it on and you don't have to deal with the 15fps garbage no more. Anyone :D ?
The problem with the barrier to entry reasoning is that high-spec games tend to scale their requirements to whatever the high-end hardware can do at the time.
Asmongold wisdomed the shit out of one of the main issues with DLSS and techniques like it (it's pretty much Interlacing all over again). It's good for looks, but it only fills in frames, doesn't and can't know about inputs. So all you're getting is the appearance of smoothness with even less input response. In high mobility games, these techniques will make the games worse, far far worse. And you have to look at it like this. If your GPU has time to fill in up to 50-100% of frames, it means the engine is leaving all that performance now used to fill in those frames unused generating real frames. DLSS and similar techniques are pretty much a hack by GPU vendords to bypass the fact developers deploy shitty unoptimized shite.
Ray reconstruction has nothing to do with frame generation. Next time watch the video before commenting. You're totally wrong and misinformed about how dlss3 works.
Your comment confuses the heck out of me. The guy in the video said that using DLSS he was able to render his object in 3ms as opposed to 40-60s. How does that equate to the engine leaving performance unused? Are you saying that 3ms spent on generating the between frame is time the GPU could have spent generating a real frame?
@@Webberjo Any processing power being dedicated to “generate” transition frames (which is essentially what frame reconstruction is - like in cell animation, each frame between each “key frame” to denote motion is less detailed but contains more information) is being taken away from rasterization processes, what they’re saying is correct - everything being used for DLSS (which is both a GPU and CPU bottleneck btw) takes performance away from the actual rendering engine of the game that upscaling works on top of.
DLSS is basically what those old CSI tech guys used when their boss said "ENHANCE! ENHANCE!" to them. Finally Hollywood CSI tech at out fingertips!
High resolution x100 zoom 😎
Except that DLSS is "guessing" how stuff should appear and generating it based on the guess. It's all just AI giving you what it thinks you want, not what the true render would have looked like.
I mean, a blurry license plate where you can't make out the characters is still going to be sloppy guesswork by DLSS. It would have to know exactly how letters/numbers look in various colors, fonts, and against various backgrounds to possibly decipher the limited information.
I wonder if AI generated evidence would fly in the court room?
I doubt it, as the AI is basically making shit up. It might be very plausible shit it is making up, but it still is only something the AI's training data assumed might be there, not an actual objective observation of any kind.
HUHUHUHUHUHUHUH EPIC REFERENCE BRO
Can't wait for Devs making Games only playable at 60fps if DLSS 3.5 is enabled.
we already there
They are already making this. As an example, Remnant 2 devs openly admitted that they made the game with upscaling in mind and that it's not really meant to be played natively.
I mean, what's the issue? Can you actually try to have a higher level of thinking in that brain of yours? You do realize graphics cards have basically been using tricks this entire time, and then games become better looking and more intensive, something that was otherwise not possible.
Like what you want them to not take advantage of the boost and then continue making games the same as it improves so you get like 2,000 FPS?
Ray reconstruction doesn't actually help with performance much at all. So this tired joke you keep repeating over and over doesn't actually apply. Ray Reconstruction is for image quality, not framerate.
cool technology, surely devs will not use it as an excuse to release unfinished unoptimized games.
Clueless Oh for sure it's optional, devs know not everyone has a Nvidia card
You could make that argument about literally any technology, including 3d acceleration itself.
The game is optimized, you just have to upgrade your pc
@@Irrelavant__ sounds like copium bro. we have seen time and time again even when hardware specs go up, AAA studios just dont optimize the game. they are like well now everyone has better hardware sp why optimize, at least at launch every game is a buggy unoptimized hell, most games do not need ridiculous high specs but it becomes a requirement because of zero care for optimization
@@Wobbothe3rd If you don't program in assembler you're just too lazy to truly optimize your game Kappa
I like when Asmon says he's going to play a game but never plays it and it's cool because the editors make some really good jokes with these lines
especially at the last 5 seconds of the vid.
same with xqc lol
DLSS is a double-edged sword. You see it as a cool technology and frankly, it is but we have a fairly high chance of future games relying heavily on DLSS to have good framerate.
Already happening. You can't run Remnant 2 at playable framerates on modern hardware without using upscaling tech, and the game doesn't even look that great. And Starfield is just a straight up embarrassment, but no shortage of people defending that optimization dumpster fire.
exactly what i was thinking.. but then again hopefully competition and the free market will force quality to triumph, also i was thinking this might actually be the way to "super realistic" graphic in games, you train the data on real life images and use it to replace the in game graphics and tada: real life graphics quality in game.
This is 100% going to be the case, because even if the game devs know it's the wrong choice, corporate greed will make it inevitable
Starfield was my first real run-in with this issue. Running it without FSR or DLSS is a fool’s errand and I kind of hate it.
I initially typed this next part elsewhere, but I’ll share it here as well in the hopes that someone who knows more than me can see it and correct me:
>This is really cool stuff, I’ve been using the older versions of DLSS for awhile now, but is it a stopgap (for gaming)? Is traditional rendering dead? Will games soon become too ‘high fidelity’ for cards to keep up? Idk, it’ll be interesting to watch this space over the next 5-7 years, but I would choose traditional rendering every time if my GPU could handle it.
>Specifically, DLSS-G (Frame Generation) just kind of seems like a cop-out to me. The cynic in me says it’s cheaper to make a card designed around Frame Gen than it is to make an actual ‘generational leap in power’ GPU, so that’s why Nvidia is doing this.
You're right about companies relying on DLSS to have good framerate, but I don't think that's necessarily a bad thing. I mean we'll get really good graphics with really good frame rate, so what's the problem?
rip game optimization
Not really, you're just another throwing the word optimization around when you have no real idea what it actually means.
Us genz will get 500fps later on for these games anyway who cares Turn off ultra setting and unreal 5 game still possible
@@wing0zerookay then, tell me why game companies are putting out shit games that perform shit on great computers then, go on, I'll wait
@Donfeur not the OP, but what has shitty optimization to do with DLSS?
@@wing0zerolol is this really the best counter? "You don't know what that means"? It's blatantly obvious that companies are going to crutch on this for FPS and push out unoptimized messes, where they lazily put processes on the CPU that take time to code properly for the GPU. I don't need to speculate that will happen when it already is happening. Vex has 2 videos going into explicit detail on the CPU issue.
Asmon reacting to two minute papers. Never have seen this combo coming. What a time to be alive! 😂
I wasn't ready, I didn't hold on to my papers!
Next step is him reacting to AI explained
@@aquinox3526 AI explained will break his mind lol.
Or David Shapiro@@aquinox3526
I hate that he says every word as if its a sentence. It's impossible to listen to.
The only issue with DLSS being mainstream is developers won't push to make their games optimized they'll just tell you to use DLSS. Games should be optimized and have great DLSS support.
Imagine using an AI to optimize the games for you. I'm sure it could be possible!
Imagine an AI to be you. I'm sure it could be possible!
@@joonashannila8751 Just look at kwebbelkop...
@joonashannila8751 imagine an image iam sure it could be possible
This! Just look at Starfield and Alan Wake 2 for example.
Some of the unmentioned negatives for anyone curious: Input Latency and Motion Clarity. Framegen can't make the input latency better than it's native pre-gen latency with reflex enabled. All temporal techniques (TAA,TSR,FSR,DLSS,DLAA) introduce some amount of temporal 'blur' to an image compared to a non-temporal solution (MSAA, SMAA, SSAA), but the gap is slowly closing. Not quite ready for competitive shooters like CS and Valorant where clarity and latency is king, but it's getting better. RTXDI is another interesting tech if you're curious about "What's next". Basically Nvidia's Path Tracing tech. Some real "Oh, they're doing magic now" tech.
You don't need DLSS for competitive shooters anyway
20 FPS boosted by DLSS to 60+ FPS will still feel like 20 FPS, right?! For many genres that's borderline useless then.
@@Charles_Bro-son20fps feels nothing like 60 unless you’re brain dead.
@@jaleshere Could work pretty well for turn based gameplay. However realtime gameplay dependent on reflexes still feels sluggish, despite the fluent graphics. I don't like where this is going - looks good in gameplay trailers, feels shitty at home when actually playing...
@@Charles_Bro-son Correct, it's a fake 60 FPS. So it will still feel like 20FPS input-latency wise. Useful for flight simulators and turn based stuff, horrible for first person shooters.
the tech to upscale things like this could also be used for ingame textures, that way no matter how close you get, the textures will be actively upscaled or downscaled depending on the relative camera position, similar to UE5's Nanite.
Could really lower hardware costs for planetary approach on games like NMS, Elite dangerous, or Star Citizen.
holy shit i think you just described reality
you mean like dynamic resolution scaling? Yeah that's already a thing. Been for a while.
Its already exist for more than 10 years, its called mip maping.
Although I like the idea of upscalers used as a way for older cards to get more performance, since nvidia limits it to new cards it's only a matter of time before they start using this "performance" as a selling point.
They already are lol
Ray reconstruction isn't about performance at all, it's for image quality.
God Bless FSR 3. The fact that it works on all graphics cards, no matter the age or brand is insane. AMD truly playing the side of gamers since NVIDIA got so complacent and uncaring after making AI money.
Addition: For the NVIDIA dickriders, this is not hate for Nvidia. This is competition loving. I don't hate Nvidia, I have used their graphics cards all my life and have only very recently switched to AMD. They are both very good but the only loss I suffered in the switch was Nvidia Broadcast. Other than that, both of them are functionally the same. However, being able to use FSR tech through the AMD Control Panel is incredible. You can have FSR in every single game even if it's not baked into the game. That's incredible. AMD is surging through accessibility for all gamers and I seriously hope because of this, Nvidia figures out they need to do the same otherwise they'll continue to lose good will. Their GPUs have already caught up, their CPUs have caught up to Intel, and their prices are normally cheaper across the board. They're doing super well and all the hate for Nvidia because they're simply not Nvidia or Intel is depressing. These are companies that don't care if you shill out for them, what they do care about is sales and numbers, and currently, AMD is sacrificing profit for goodwill to get ahead of the competition and it's working. Later on they'll maybe pivot, but right now, take advantage.
@@Wobbothe3rdthat isn't an upscaler. He is talking about DLSS not ray tracing
Thats why you either suck it up for nvidia or buy the competition
I remember learning about this in game dev school few years ago and thinking to myself: "Oh I can't wait for it to be used in games!" Good times coming!
@@ramonandrajo6348how's it a gimmick? Its a standard for how good it is it improves your fps dramatically without affecting the look of it noticably
@@ramonandrajo6348dlss isn't a gimmick which this video is on about....and if your on about ratyracing it also isn't a gimmick lmfaoo, it's hard to run yeah but it's a major game changer to graphics
@@ashleybarnes9343 It is a gimmick, as well as being a crutch for extremely Unoptimized resource utilization that is non-functional without it. If a product needs to be de-scaled and then checkerboard the screen up to “full res” to run in the “recommended” specs, your game sucks regardless to FX used.
@@ramonandrajo6348it literally triples your frames and reduces input delay lol wym
It still feels like it's being used as a crutch for poor optimization.
When they say DLSS 3.5 for all cards it’s confusing.
Only 40 series + cards will have frame generation thus have vastly different fps if that makes sense.
Ray reconstruction is entirely separate from frame generation, it will work on any rtx card.
@@Wobbothe3rd Working is one thing, working as intended is another. They're trickling down features to older series just enough to let you crave the real deal (which is more feasibly achievable on 4070 and above), to prompt you to give in and upgrade. Genius marketing with no backbone, typical of late-stage capitalism. =)
You just need tensors to run dlss (and ai processes) and all rtx have them
I hope the rumours that Switch 2 will use this tech are true. For me it's the perfect use case, because you can't make hardware super powerful if you also want it to fit in a handheld. And this would compensate for that. Yes, it has some drawbacks, but it would negate most of the problems current Switch faces.
Doesnt make any sense at all. Nintendo uses no cutting edge hardware or software and the only hardware using frame gen and advanced hardware accelerated upscalers are Nvidia gpu's which will never be in a small form. Mobile = APU/integrated with no dedicated gpu. Also is expensive to produce wihich Nintendo never deals with.
it's most likely going to use an Nvidia APU which yes will probably not have frame gen but dlss would still be available @@Ay-xq7mj
@@Ay-xq7mj iirc the rumored chip Nintendo will be using isn't exactly the latest or cutting edge of Nvidia in terms of hardware. it will be on ampere which at this point is a couple years old and will be replaced by Nvidias chiplet design in the 50 series ~2025. So Nintendo is quite literally gonna release the Switch 2 on hardware that Nvidia has matured at this point. Switch 2 WILL have DLSS and Frame gen available to it.
astroturfers have gone mad... lol.
Sorry Buddy but that ain’t happening😂
I have been following "2 Minute Papers" (I just can't remember this doctor's actual name) for years now, he's a genius.
This "illusion" of creating more frames by having the computer predict where and how pixels are going to move in the next frame in order to place a blend in between has been used for years now in video production with the purpose of creating fake super slow motion videos, as in GoPro Hero promo videos, which feature scenes that were just not possible to film with their cameras back in the day. The name of this feature was known by the commercial name "Twixtor", and it was very popular back then.
However, it took hours to render just a couple of minutes of video with this effect, what is actually new here is that it's working at real time now, which is absolutely mind blowing.
"What a time to be alive!"
The de-noising is being done simultaneously by the upscaler itself in one go now.
witch results in higher quality and more stability.
Yep!
Don't know about quality considering how much more blurry the games look
@@MGrey-qb5xz Watch the latest Digital Foundary video about Cyberpunk 2077 v 2.0 ;-)
This sounds like how our brains generate images- take very low fidelity and make up the rest
What does that make porn?
Two min papers guy is so cool. Long time subscriber. He brings great insight into the world of AI GFX stuff first hand. I love it.
the way he talks makes me want to jump out my window, why doesn't he talk normally?
Do it. Jump.@@astronotics531
@@astronotics531 Because english isn't his first language?
@@astronotics531 did you know there's 7000 spoken languages in the world?
I couldn’t finish the video. There are millions of non-native English speakers out there but I have never seen someone use so many commas while speaking. He has the vocabulary and the pronunciation but wtf
10:10 Bethesda Release DLSS3.5-compatible Skyrim and call it another remaster
"This makes the barrier for PC gaming that much lower".
Yeah, you only need a 40 series GPU for frame generation, they're practically giving it away! /s
The problem is that barrier of entry won't last long. For now this technology makes new games accessible to older hardware, but games going forward are going to be developed with this technology in mind, so the requirements will go up accordingly.
Correct. It's sort of like giving everyone money to put towards their rent. Eventually the rent just goes up.
Older or very cheap hardware can't do DLSS.
Man could you imagine if this kinda tech dropped on the next nintendo handheld or really any other portable system. The ability to boost performance using optical flow techniques could be the key to keep those types of devices small and lightweight while competing with home consoles and PC.
Some potential leaks about the next Nintendo hardware (Switch 2, or Switch Next-Generation, or wtv it's gonna be called) mentioned the fact it is probably gonna use DLSS/FSR3 along with better hardware to make games such as Final Fantasy 7 Remake or the Matrix Demo be able to run on the handheld console as if it was run on a like PS4 pro
when you start using those tech, it is a simple proof that the hardware sucks @@arfanik9827
Supposedly this is already confirmed for the Switch 2
@turbochoochoo Source and Evidence please Mr.Nitentoe
without RT and DLSS, we would all be playing 4K right now with a 300 dollar card if they focus on rasterisation instead of waisting 50% of the pcb on creating an artificial battletech with AMD and intel, to force customers into buying their product with totally non mature technology. gives 4K to everyone first, and 8K to rich people, then work on reflection. Who care to have 90fps written on fps indicator when in reality there is only 15fps running with all the latency/ghosting/noise/color shifting, blurryness/frame distortion/cutting problem it involves. Give us 60 fps 4K with decent price, and improve shadows, water, texturesn then create real RT. Because right now it's kind of shame in 2023 to see a 2000 dollars card unable to stable 60 fps @1080p with max settings on some 2020 games such as cyberpunk or flight sim without using artificial frames and resolution through DLSS.... If to continue with that tendency, you rather watch a screenshot slideshow rather than playing videogames.... It's totally stupid because RT impact on games is more important than switching from 1080p to 4K. It's as demanding as playing in 5K. And we could all be gaming at 4K without these tech, and honesty I don't see anyone in the world trading 4K for RT if they could see the diff between 1080p and 4K, they would not give a damn about RT.
thats a pretty good point, you should really up your resolution to at least 4k before you worry about RT, but at 4k there really isn't anything that can run RT.
2:10 "guys, I think that's pretty fast" sums it up perfectly 😊
The plot twist is that the technology for 3.5 is advertised strongly on the comparison videos with frame generation on while also pushing strongly that DLSS 3.5 is available without frame generation for all RTX cards. A tad of manipulation there. The real difference will be 5-15% at best depending on the game. At least it will look better with more or less the same performance.
I think a good way of looking at it is the comparison between brute forcing computation, vs becoming increasingly better at predictive visual generation. Imagine if we used our brain as an example, vs the brain of a whale. It can weigh over 10 kgs, vastly outscaling ours in terms of sheer size and also in neuron connections. But the reason our brain is different, is due to how different parts interact, how some neural links are formed, how our process of learning over time to rewire our brains to constantly become better at things we set out to improve. That is what truly sets us apart and allows us to go from the intellectual level of a wallnut as infants, to gigabrain 5heads (on occasion) as adults. Neural deep learning AI does the same for computational graphics, it entirely reworks where the effort is put, it forms 10 99% positive connections before a brute force method has even calculated a single 100% step. And with time, AI will only get closer and closer to 100% for each calculation, to the point where we can't actively tell apart the differences - which is pretty much where we're at right now. So from this point, the scale is only going to further extend in favor of AI accelerated predictive generation, rather than brute force calculation.
Just like how we read words in a book, we don't need to read every letter one-by-one to figure out what the entire word means. We can use predictive deduction to skim the possibilities so fast that we are able to make out even complex words practically in an instant, which allows us to read books in real time. It sounds strange, but when you think about how many possibilities we are filtering in milliseconds to arrive at the right conclusions in real time, it's crazy. AI is literally that. And yes once in a while (just like us) it gets a prediction wrong, and there can be brute-force reliant methods to fact check these more high-risk generative predictions to ensure that false positives get reworked, and this process can be done in a millisecond too because the actual hardware isn't working overtime on brute forcing every frame on its own.
Frame Gen may seem like a gimmick now, but in cohesion with DLSS and ray reconstruction, I think within 2 years a most this will be the standard that takes over from brute force hardware calculation when we look at general performance levels.
You raise some valid points.
It's true, the tech we've been given here is next-gen. The fact that Nvidia did it first is a logical consequence of their many advantages over AMD. They have the brand recognition, the market share, and, most importantly, the resources. This is the turbo-capitalist reality (all else be damned), and Nvidia hasn't shied away from banking big on the recent AI hype either.
AMD has only ever retaliated retroactively; playing catch-up while setting competitive pricing has always been their signature (case in point, they failed to rival Nvidia's flagship; RX-7900XTX is as far as they could go).
FSR is also an attempt to mimic DLSS, and AMD lagged behind nearly 2 years with it. FSR 3 got released yesterday and it is hardly on par with DLSS 2, let alone 3 (to say nothing of 3.5). In short, a disaster, but a promising one if we're to have some faith in their Fine-Wine technology.
I appreciate the idea of DLSS itself in a limited set of scenarios, but I will not give credits to Nvidia for implementing it first. This is simply the way the GPU market conditions have aligned for them to end up as the DLSS pioneer. Historical circumstances and the perks of monopoly. These are just farcical levels of RNG at play.
If AMD released FSR before DLSS (and made it just as good), things would've played out differently. But that's one too many ifs and what's done is done.
Right now I'm more concerned about a future shortage of GPU manufacturers for the masses, because recent trends show that AI and DataCenter are far more profitable moving forward. I fear the day AMD stops developing GPUs because it will simply no longer be profitable, which would give Nvidia complete chokehold over the GPU market (ceteris paribus).
And I also dread the idea of using DLSS as a way to cut corners on proper game resource optimization because Nvidia can get away with an upscaling gimmick that tricks our monkey brains. (You can find arguments in favor of it, but I remain forever convinced that DLSS could never hold a candle to true frames rendered at native res with real computing power.) This creates a precedence that could soon become the new standard for gamedevs. Just as Apple did with the notch; it was just a feature for the iPhone, nothing groundbreaking in particular, but it became the new norm because of Apple's power to set trends and the masses' willingness (or obliviousness) to take it for granted. And now here we are: most phones have notches. I hope the same rhetoric won't apply to frame rendering in the future.
So, as it stands, this endless arms race between red and green only affects the average PC building enthusiast due to practices employed by both sides, with AMD over-promising and under-delivering, and Nvidia adopting bandaid, Apple iPhone tier tactics and planned obsolescence (crippling the performance of older cards with newer drivers to boost sales of latest gen cards).
I'm a GPU centrist, I've owned both Nvidia and AMD GPUs on and off, but I think neither team deserves to be spared from due criticism during this hell of a period for R&D. My heart and wallet say AMD, but my brain and foresight say Nvidia. And that's a really disheartening prospect.
I am trying to root for the underdog, but right now it feels like an inadequate time for it. Advancements in AI computing are rattling the world as a whole (let alone the gaming world), so I find it hard to imagine AMD even catching up with Nvidia in the AI division of their R&D for upcoming GPUs.
At the same time, I cannot endorse Nvidia's greedy marketing practices.
Part of me wishes that the AI hype would just wane already, but that's no longer an option. We've already unlocked Pandora's box. So, it makes complete sense that whichever GPU manufacturer outshines the other in the AI segment is sure to have a significant headstart going into this territory. And right now that manufacturer is undoubtedly Nvidia. I pray for AMD to get their shit together, and fast.
Alternatively, we could also hold on for a divine intervention from a third competitor under the name of Intel, but I have no illusions. Their focus seems to be on iGPUs now and for the foreseeable future.
@@GryffinTV I pretty much agree on most if not all the points you raise, and have thought similarly about AMD and their path. There has however been recent aspects of their business practices that have helped shed light on some potentially conflicting elements - for example the idea of AMD being the runner-up and therefore not being the turbo capitalist entity to push ahead compared to Nvidia.
From observing AMD's behavior over the past 3-4 years, it has become evident that AMD in fact actively choses to not bear the torch, even when they have the opportunity to do so. They could take the lead, but don't. In fact, it has led me to believe AMD (at least with their current plans of operation and business) are not actively interested in stepping out of Nvidia's shadow. I think for the time being, they're more comfortable staying as the perceived underdog, the competitor that plays catchup, the one fighting "against the odds" so to speak. The reason I believe this, is because AMD has shown a very clear bias towards mimicry, even when they could easily take the lead in sales or general market share, and I think one of the key reasons why they're sticking to the shadow, is because the risk of being the forerunner is also that you end up tanking the hardest hits if things do not go to plan. AMD has consistently struggled with internal planning, which does make sense because they're a smaller scale than Nvidia and they are more limited in resources, thus limited in options and leeway. If things go wrong for them, they fall harder, with fewer safety nets. So there is lower inherent risk in actively choosing to play the role that catches up and mirrors what Nvidia does, because it lets Nvidia pave the path and pay for that pavement upfront, which in turn lets AMD coast more easily and even if things go wrong, they will be safe in the fact that Nvidia by nature has gone down the same path so at worst they're both wrong, and in a duopoly market with no real alternatives, that is a very safe place to be. But if AMD tried to lead the path and Nvidia decided not to follow, in that case if AMD's path goes wrong then they're fucked.
Now Intel is starting to throw cogs in the wheel, but it's a slow process and it gives time for AMD to figure out how to position themselves in the next 10 years - and I think that may also be why we're currently seeing small shifts in market segments and what type of GPUs they focus on, as well as how they are redistributing their balance between consumer and enterprise markets. More than likely AMD is actively choosing to stay in Nvidia's shadow for now, while they test different waters to see potentially viable future paths to branch off to post 2030, and see what type of groundwork would have to be laid out to enable those paths. And while doing so, staying in Nvidia's shadow is simply more consistent and less risk. Two things a company in process of assessing change loves to be in.
Unfortunately we live in a reality where this will be a double edged sword for consumers. Devs will begin using this tech as a crutch for optimizing their games and we'll be right at square one. The games will be so poorly done that enabling DLSS will feel like just playing the game normally and then god forbid you don't use the feature or else you'll be running the game as a slide show. Now if the game studio has some integrity and puts in 100% effort into polishing their game regardless of DLSS then we'll get god tier games to play but with how big corporate greed is in terms of cutting costs, I have my doubts unfortunately.
Keep in mind, though there is support for DLSS 3.5 on Nvidia cards since 2018 (Rtx2080ti), they will not have the full capabilities unless you have a 40 series.
No, only frame generation is exclusive to 40 series, everything else is on any RTX card.
to be clear
DLSS 2 (SUPER SAMPLING/UPSCALER) is used on DLSS version 3.5 (nvngx_dlss.dll) which is available on all RTX cards
DLSS FG (FRAME GENERATION/DLSS 3) is also used DLSS version 3.5 (nvngx_dlssg.dll) only available on RTX 40 cards
DLSS RR (RAY RECONSTRUCTION) which is DLSS 3.5 (not sure if it will have a separate file or not) that's available on all RTX cards
yep only make you more confused
so basically
RTX 40 Series: supports all things offered by DLSS
RTX 20/30 Series: the same except Frame Generation
@@Wobbothe3rd I like how nvidia did not add FG for old gpus, just to be sure not to extend gpus lifespan
got one 😉 4090
@@darvamehleran786
Oh, and don't forget how NVidia doubled pricing for basically the same performance.
I first started gaming on an Atari 2600. This is like a whole new universe. It's almost scary to think what it will be like in 40 year from now.
Standard ray tracing isn't remotely worth the performance hit, and path tracing looks incredible but is prohibitively expensive. Outside of high end consumers who can afford 4090s, we're still waiting on the hardware for proper ray tracing in video games.
The sad reality is that it doesn't change much for the consumer because this just means most devs will get lazier. A lot of games optimize towards 60 fps and if they can reach that point with AI they will have increasingly high quality graphics but with optimization so bad it requires high-end hardware anyways to run. Ideally, practices don't degrade over time but I am not particularly optimistic seeing what games like Remnant or Starfield did.
Yeah, but in the short term at least, they'll have to code assuming it's not present.
I agree so much about the 6k bitrate. They need to up it on twitch.
It adds latency if you use frame generation. But normal DLSS with reflex is pretty good. Some weird smoothing occurs with movement but it's getting better just not quite there.
Not just that, the ghosting is really annoying to me.
Yeah you have to run it on quality or it looks terrible. @@stysner4580
The Latency hit when gong from 15 fps to 88 has to be insane, also I prefer last years ai generated woman with 7 fingers and weird teeth
Since DLSS works so well for nvidea the newer 4000 series cards already have worse component on them because they expect you to use DLSS. Im all for DLSS but nvidea is just to profit hungry it is sad.
2 Minute Papers is on of the most underrated Channels on youtube!
Its like listening to Hercule Poirot explain who the murderer is in the end of an episode. I love it😀
Dude needs DLSS for the way he talks 😂
Lol
this guy's channel is amazing if you are pasionate about these kinda of things. I've been following him for a while and his videos are top notch, great explanations
EDIT: they just dropped update 2.0 for Cyberpunk, 48GB so it's a great time to start a fresh new save (as they recommend)
good to know was waiting for the dlc release to see if a new mega patch happens after
I love this type of content, but my ears bleed from hearing this ANNOYING narrate, I prefer hearing the TikTok AI voice generator over this tbh.
Wow very cool. But my ears are bleeding from him stopping every half a word.
It'd be. Amazing. If he could stop. Adding full stops. Mid sentence.
I do... enjoy... his content... and I have... seen some of... it before... But... I just cannot... get over... the way that... he talks...
The only bad thing about this is that devs have been relying on this for games to run instead of optimizing them properly. Either that or graphics card manufacturers actually tell them not to optimize for older cards so we have to make the upgrade.
My 1070 looked pretty decent when it came out running most games I wanted to play at max settings 1080p and keeping stable 60fps if not higher (I have a 170hz refresh monitor).
Newer games don't look like they're that much better in terms of graphics but my 1070 just struggles to get 30fps on low setting at 1080p and that shouldn't be happening. That's just bad optimization from the devs that are using DLSS or Fidelity FX as a crutch, as games have not made the graphical fidelity leap these hardware requierements are suggesting.
The issue is, if it's using two frames to generate an in-between frame, there'll be input lag because it has to wait for the third frame to be generated naturally before it can create the 2nd frame to display. But if it's using AI to "predict" the next frame based on the already displayed frames, then there will be errors, especially when things on the screen are changing a lot. The AI can't predict that you'll decide to turn around or fire a big gun or whatever at that moment. Hell, it almost certainly won't be able to dig into the game mechanics to accurately predict even what the enemies and NPCs will be doing next.
This is the comment everyone needs to read. DLSS is not gonna be great for less casual gamers.
Excited for ray reconstruction. Frame generation, on the other hand, adds input latency and is only available on 4000 series which should be powerful enough to run everything at 80+ fps without it anyway
I just ran the benchmark for CP2077 2.0 patch with high settings and path tracing, and without frame gen, it would be ~50 fps.
You're right for most games though.
Frame gen doesn't add input latency. It just doesn't fix input latency.
@@wholetyouinhere when games start being made with DLSS in mind and don't optimize because "lol we have DLSS" then yeah it basically becomes "adding input latency"
I love Two Minute Papers videos.
alot of people doubt DLSS and think its bullshit but its as real as it gets u basicly sacrifice a small bit of ur input latency and it introduces a little bit of blur but the technique improves so insanely fast that soon its gonna be almost impossible to notice these flaws i remember when DLSS 1 came out and u could rly feel it yet the improvement to performance was already astonishing well now u basicly only got the upside cause they minimized the downsides
There’s DLSS and DLSS-G. DLSS-G is Frame Gen. Right now, only the 4000-series RTX cards can make use of it.
DLSS lets you run your games at a lower resolution scale and basically maintain the same visual fidelity. DLSS-G is the one that actually generates the frames.
I am still skeptical of DLSS-G bc it feels like a cop out and a reason for Nvidia not to keep improving the ‘raw horsepower’ of GPUs.
at 13:17 he is so fucking wrong I looked it up and got this "Globally speaking, there are over 1.8 billion PC players, about 151 million PlayStation players, over 63 million Xbox gamers, and a little over 87 million Nintendo players.".
I spent little over $600 dollars to build my PC this year. The barrier for entry really isn’t that high. A big part of why people don’t get into it is because they perceive it as super expensive and also because they think PCs are complicated to build and don’t want to learn. Consoles are popular because people don’t wanna do anything themselves and because they think they have to drop 2 grand on a PC when they really don’t
its seems expensive because the high end parts are more expensive than ever, but its actually never been cheaper to buy a decent rig. low to mid range parts are more powerful than ever.
Frame generation is a nice to have thing, but its seems to only be worth using if you're getting less than 60 fps..it does increase the visual smoothness etc. but from my understanding it has no effect on input latency so the input would still feel like 25 fps if that's what you're able to run without frame gen. It certainly depends on the type of game you're playing and the input method used.. I can't imagine it would feel very nice to use with keyboard and mouse as that is more sensitive to input delay vs controller. Also, at this time frame generation requires RTX 4000 series cards meaning at least an RTX 4060. I think it will be way more interesting if AMD FSR 3.0 allows frame generation on all generation of cards as it seems more of a low end GPU use case thing.
the thing is that you can tweak the resolution and setting so that you start generating from 40-50 fps, and get 80-100. That's way better than playing on low settings in 60 or high in 30
@@Qrzychu92 this proves my point even better, who is more likely to turn down resolution? High end GPU owners or low end? And even DLSS upscaling from anything below 1080p looks pretty bad..it does retain more quality than FSR but ideally both tech should be used at least 1440p resolution. The higher the base resolution upscaling from the closer FSR matches DLSS in image quality, I think most people put too much stock in DLSS quality vs FSR because they both suck at low resolution lol.
@@Coffeeenjoyer31 "And even DLSS upscaling from anything below 1080p looks pretty bad" TODAY. Don't forget they are working on it.
I can easily imagine that in next year or two, upscaling from 720p on high will look better than native 1440p.
For today, you are right, just don't forget that there is tomorrow. If you told somebody during the RTX20 series premiere that there will be AI powered frame generation RECALCULATING THE RAY TRACING for the new frames, nobody would believe you, or say that it's going to be viable at all. But here we are now :)
@@Qrzychu92 that's a fair point, we will have to see how it progresses. However it will be a challenge surely because lower resolution is just less data to work with. I certainly wouldn't recommend a product to anyone based on future improvements.
@@Coffeeenjoyer31 DLSS can already generate 80% of the frames according to this very video
All I know is that I feel tired after listening to the AI voice for 15 minutes.
What a time to be alive!
The First Descendant has this, and it works almost exactly as advertised. The frame gains while still having tons of quality is amazing.
Dlss 3.5 is a de-noiser that’s supported on all rtx cards for better looking Path Tracing(not ray tracing and doesn’t affect performance). Dlss 3 is frame generation only on 40xx cards. Makes sense right.
I hate DLSS for one reason. Nvidia is now selling you software, instead of hardware. You are not paying for a product, you are paying for a service. How much do you wanna bet, that this ends up screwing the customer? You really think they developed this for us to have smooth framerate? They developed this so they could charge us for the same performance, while not having to pay for the materials themselves. I just know this shit is gonna suck
The only problem with DLSS is that Nvidia make so you need to buy/re-buy their hardware to use these technologies, in comparison AMD is launching FSR 3 for not only their GPU's(older and newer ones) but for the whole GPU market.
Sadly nVidia seems to only put a little more Tensor cores into consumer cards than needed. The reason why a little more than needed is to get in front of AMD in performance. DLSS DLAA and etc. with their deep learning line of consumer technologies uses tensor cores not CUDA. So nVidia can and probably will do what you are saying and make more tensor compute intensive models that require more tensor cores to use in future cards to get consumers who want to use that feature to get a new card.
Exactly, im buying AMD card next time unless scammer nvidia supports my 3060 as well for dlss 3.0, not believing ''it wouldn't work well enough'' BS! If they can not support even last generation for new technologies ridiculously, there is no point buying 50 series etc as it might happen again...
@@ggoddkkiller1342it "works" yes but running without it runs better which is the sad thing. The value i see in the 50 series will be in the 5090 to upgrade from the 980 in 2025. The reason why the 50 series of nVidia instead of the 8000 series of AMD or the 800 series of Intel is 3D modeling program support. CUDA is supported by games too so that is why i use the same card for both. Gets in the way when the full GPU is needed for GPU compute in some applications but having a integrated GPU fixes that which i learned from not having one this time.
@@yumri4 Nah still not buying it, 4050 can handle dlss 3 but 3090 ti can not with many times more tensor etc cores?! Even my 3060 outperforms 4050 in every way possible...
@@ggoddkkiller1342 with a quick google i found that you are safe, the DLSS will support the GPUs from the 20 generations, but in my case, my old 1050 will only have the FSR 3
The problem is that this threatens the top GPU market because most people won't need to buy them anymore in order to play a game on Ultra with high FPS. I bet Nvidia already has amazing DLSS technology to make all this even better, but they won't release it because of their top GPU sales.
Nvidia are already locking their tech behind new GPU releases. They are not enabling these AI solutions on previous generation cards, so you don't have to worry about poor Nvidia losing sales. And developers have already started skipping optimization in favor of using upscaling tech, so you will need high end GPUs in order to achieve playable frame rates in the latest AAA releases.
Hardly. What we will instead see, is dev studios taking advantage of the higher ceiling in order to simply create better visuals for "high and ultra" settings. Like Cyberpunk's "Overdrive" mode for example which you literally need a 4080 or 4090 to run at playable rates even with dlss on 4k. And even then, "Overdrive" is still a compromise in many regards compared to full global illumination and caustics. It's not because we are in lack of dials to turn up. We have only turned them up by 5% at the current point, and 90% of dev work goes into figuring out how to make graphic compromises so the dials can be turned down without the visuals going to shit.
High end gpu market will always have something catered to it. Trust me, that's gonna be the least of our problems (3D designer by profession, if game devs could ad 5% of what this industry utilizes, you wouldn't believe your eyes).
The cynic in me says Nvidia love this - they will design their future cards around Frame Gen.
- in games where Frame Gen is not used, the FPS increase over the older cards will be less impressive.
- where Frame Gen is used [needed?], the older cards won’t be able to keep up whatsoever. Less savvy users will just assume their “old GPU” is trash and will buy something in the latest generation’s line-up. Frame Gen will give the illusion of increased power/performance.
You really think Nvidia is sabotaging their market like this? No, they're gonna lock this tech to the new GPUs. And when the marketing drives off, they're figure out a different technology to sell. They're not that dumb.
Besides, there's still people who want REAL frames like me.
They'd rather sell their AI GPUs anyway.
I am an older gamer. I started gaming at the arcade. Then played pong on my family's tv. This is futuristic technology to me.
Just marketing smoke... Games already have so much graphic options... It's embarrassing. And what do we have?? Another graphic option. The PC world is a mess. Meanwhile the hardware is failing left and right. Particularly GPU's! They're getting so hot now, pushing the 4K resolution at acceptable framerates, that is blowing the chips. Or blowing the connections in the chips... And actually, it's not just the chips it's also other components like the power connector debacle on Nvidia 4090's... So now GPU's instead of living 10-15 years... They barely reach the 5 year mark. Anddddd of course, more E-waste for landfills. Let's hope nobody sets their house or belongings on fire because that's also on the table, last but not least!!!! Have you seen the size of GPU's too?? They're literally BRICKS now, it's a JOKE! The coolers are so massive that you need to have some sort of support inside the case so the whole thing doesn't fall off and pulls motherboard components along with it. And this "4K race"...... There's something going on in the manufacturing process or the quality of materials that is just making garbage GPU's... So the whole industry... It's a snapshot of the actual GPU's lolol It's so full of bs that is crushing itself!!! This thing doesn't have a leadership with a clear objective and it's being run by lunatics. We don't need this bs.... What we need is BETTER games. There's barely any game that justifies having this kind of hardware lolol
The negative of frame Gen is the 4x increase in input latency enjoy games feeling like shit.
Two minute papers is GREAT channel. Highly recommended.
He speaks like middle schooler where English is not spoken everyday trying to read English.
@@blushingralseiuwu2222 I so no problem as long as he is understandable
@@blushingralseiuwu2222 What is the problem? He is not a native english speaker. Didn't you understand anything he said? The guy is incredible at Light transport research and always shows the newest papers in tech, you get that anywhere else on this platform.
@@blushingralseiuwu2222 He is Hungarian. How is your Hungarian?
Convince me he's not paid by Nvidia. I'm not convinced.
image looks smoother input lag stays the same based on the original frame rate = action games unplayable
Its true, I downloaded the DLSS 3.5 mod for starfield, and that shit literally doubled my FPS. I went from 50-60 frames in cities to like 110 frames on my 4080. That shit is fucking god tier
It basically blurs the game just the right amount to keep fps consistent, if you Don't mind taa even bad taa then go for it
@@MGrey-qb5xz not sure what youre talking about by that because with the DLSS 3.5 mod, the textures and objects are sharper than the vanilla. It does the opposite of blurring.
@@husleman probably using a sharpening filter alongside it to compensate
@@MGrey-qb5xz I am not. All I have is what ever the DLSS mod files included.
50-60 in Starfied on a 4080 are you mad?
Todd Howard is a clown
This does less showing how impressive DLSS 3.5 is and more raising questions about why a 40 series card still only gets 20fps in Cyberpunk, the same fucking performance my 20 series card was getting.
It is very expensive magic that still doesn't look very good.
DLSS 3 is going to be available only to the latest most expensive nvidia cards.
AMDs FSR3 will be available to ANY video card using DX11/12, with some functionality limitations depending on the card capability, and it will be doing the exact same shit and more.
Problem is this is a way for Nvidia to sell software instead of hardware. Which will mean you will always have to buy the next generation of graphic cards every time to get access to this software, even though it could probably work on any GPU.
@webtiger1974 matter of time lol doesnt matter if its nvidia amd or intel if this shit gonna be in all games every company will implement something like this 100%
Nvidia is now primarily a software company.
Wrong. This technology does not work on any hardware. Hence the tensor cores. AMD does not use em, slaps cheap extra ram on the card and people lap it up.
@@cirescythe AMD is normally accepted as the best general hardware company (so stuff that's for like work and not gaming). Their stuff doesn't have tensor cores because they don't need it, it's not the audience they cater to.
@@Hiiiro They're as much of a software company as they're a candle company: not at all.
i remember the first release of raytracing was a huge milestone and now they advanced so much incredible
Imagine what kind of world we could live in if we spent as much time and effort on all tech as we do in video game tech.
I mean the chinese produced autonomous killer drones that can pathfind through forests and use facial recognition.
umm AI, robots doing backflips, personal drone cars, jetpacks etc...
We do
Alot of tech isn't developed primarily for videogames, it working for games is merely bi product.
Heavily increases input lag.
DLSS (Deep Learning Super Sampling) heavily increases input lag due to the waiting time of holding back both frames to generate the AI frame.
well at least we all know who the noobs are now, wanting fames instead of reduced input lag.
Can't help but think this gives devs especially those at AAA sphere a reason to disregard performance optimization even more.
I think they’re waiting for the technology that makes it so AI does all the optimization for them which DLSS basically does in a way already
No it doesn't. DLSS trades various performance metrics - sharpness, clarity, motion stability, resolution - for other metrics. The grift works because we typically don't measure the metrics that are sacrificed. None of this is to say that DLSS is inherently bad, but it's definitely not "free" performance, and it's definitely not legitimate optimization.@@RlVlN
I believe the Frame Generation feature is only avaiable for 40X0 series cards.
16:23
Night and day, I sprung on a 165hz G-Sync certified monitor back in 2016, directly from 60hz to 165hz was wiiiiiiiiild how smoooooooth it felt
Not sure if it’s the same as this but I’ve already downloaded a few apps that use AI to un-blur some not so good photos I’ve taken on a trip. Couldn’t beleive how well it works!
Problem we started seeing nowadays after DLSS became mainstream is that devs design games to only reach playable framerates WITH dlss or FSR enabled, which is very counterproductive. It will actually end up raising the barrier to entry instead!
All part of the plan. When devs were using traditional native rasterization, games scaled very well with older GPUs. This was obviously cutting into Nvidia's sales, so they needed to create a business model that forced customers to upgrade - and Nvidia needed to do this without relying on development of more expensive (for them) hardware with legitimate generational performance uplifts. The solution has been AI software, which has become Nvidia's entire focus because it is way more profitable than delivering better hardware. Now devs are programming with these AI software solutions in mind, and Nvidia is locking that software down to new overpriced GPUs that feature only minimal performance improvements (keeping Nvidia's own costs down). The end result is going to be marginally better looking games at the cost of optimization and an upgrade cadence that will price out the vast majority of consumers. Eventually, people will be forced onto cloud gaming. They will own nothing and be happy.
@@wholetyouinhere Very pessimistic outlook, but all signs point to you being right.
The feeling of playing on 30 fps, now on 60 fps
Plenty of people have enjoyed games at 30fps, and not all rendering is interactive.
and with the input latency of 30 fps, and an image that doesn't quite look right with blurry edges and random artifacts, welcome to the future kids!
anyone below 40 series cards dont get frame generation.... thats what ur seeing 15 - 80 fps jumps. Again if you have a 20series or 30series cards upscalling is only really usefull on a 1440p monitor as upscalling will just make ur game look like mush at 1080p. just an fyi i think asmong got this all wrong.
This blokes lungs have air for 2 words only
"Don't worry guys, DLSS will make the game optimized"
I love dlss 3, it's working quite well for some games, most of them single player but I even use it for darktide 40k and enjoy 122fps on average and I can play without tearing.
Game looks amazing and the technology was the deal breaker to go for an overpriced 4070ti.
I had a 6800xt and problems with VR like crazy. The Nvidia works well with my quest 2 and link cable (much better than the amd).
What Nvidia is lacking is their quality and GUI and function of their drivers. AMDs technology is good, but amd lacks some niche things (Like VR or at least the few games I am playing weren't working well😂)
So far so good. Hope everybody's card is working well, regardless of AMD or Nvidia or Intel. The best should be that we all can enjoy those marvelous technologys
Personally i dont like DLsS 3 in games with ton of movement because it produces ghosting which means several semi-translucent backends of object like car following the actual object or car when in motion
Ray reconstruction and frame generation are Two different things. You can have one without the other
AMD cards have never worked right with VR headsets like the Quest2 that utilize transcoding. They just have bad video encoders and worse driver support for them.
I ended up getting a 3070FE for Darktide and it still ran like total trash ran better months later but no by much. Its a gimmick same with ray tracying.
@@ramonandrajo6348 AI based upscaling is a gimmick? That why Intel and AMD are rushing their own version out the door? It's the future of computer graphics whether you're onboard or not.
100% Nvidia makes this a monthly subscription. Why would they release something that would make their GPU sales go down dramatically over a 10 year period?
There will be no subscription. GAmes would just became more beautiful and less optimized to compensate.
GPU competition used to be about raw performance. Look what it has become right now.
The only thing they need to do is just shut their mouth and optimize the game carefully before shipping it. And we simply don't need some "tape" to make it just playable.
Nice to see two minutes papers here. I already watched this video a while ago, but I'm fine with rewarching it for the reaction.
DLSS is basically the version of how video encoding works but video game style. Meaning the hardware doesn't need to render everything in between frames but the software can just fill it in. Like in how videos, the entire scene does not need to be rendered in but only the moving parts.
I mean kinda.. I am not sure I would use video encoding as a comparison to DLSS. I get what you were trying to say but it's not terribly accurate to compare the two at all. Especially since DLSS is ultimately not software based. It is using the AI cores on the GPU to do the compute. It is still very much a hardware based process and still very much rendering at the end of the day. It is just taking an alternate rendering path. Rendering is just literally the act of creating an image.
@@JathraDH It's not a hardware thing, its a software thing that is hardware accelerated. And video encoding is also commonly hardware accelerated. Ever heard of NVENC?
@@bethanybellwartsEverything is ultimately a software thing that is hardware accelerated which is why we call it "hardware" and not "software". When we call something "software" it means the CPU alone is doing the processing.
If we could do AI efficiently on a CPU alone we wouldn't need GPU's to run AI in real time now would we?
Encoding however is the act of taking 100% complete data and figuring out which parts of it can be removed while keeping the end quality at an acceptable level. It is a subtractive process (outside lossless formats). The replay of encoded video really takes no work because the work is done up front on the 100% complete data set.
AI is starting with an incomplete data set and trying to create more data out of nothing. It is an additive process and the work is done in real time which is why it is so intensive to do.
AI and encoding are actually diametrically opposed opposites of each other, which is why I feel comparing them isn't really proper. They are fundamentally not remotely the same type of process at all.
Similar but with some key differences: compression means you can analyse the next frame to get the motion, DLSS predictive frames is much more complicated because of the real time data in input and the fact that the AI pipeline generates the vector based on the knowledge it has of the probable evolutions. 3.5 is just even more over the top as the training data takes into account lightning specifics, as if it understood the image and the physics behind.
Some artifacts can be generated because in a game you don't know what pixels will move next, so the more FPS at the start, the better the result.
1 out of 8 frames is actually rendered, yeah no thanks I prefer not to wait 875ms for my input to register 💀
I actually couldn't watch this video because of the guys narration voice. Jesus holy christ
There is ONE BIG ISSUE with that all technological progress. And that issue is greed. Games are not made for us to have fun. They are made for us to upgrade our PC to have fun. Devs dont give a **** abou game optimalisation. They even admit it (starfield). Now they are even more agressive tools like dlss 3. Next year games will be bearly playable (~30-40fps) only with dlss 3, but will offer 6-8 years old graphic. But fear not we will get new 5000 series with dlss 4 wich allow us to play again in 60+ fps. And then repeat.
All that new technology is great and we as gamer hope games will be made in first place to work good without it, and then that technology will be added to further enhance our in game expiriance. Is it? Or maybe I am right and only good use of for us is that, devs use it to make games somewhat playble years after release.
If you're confused why the framerate jump is so massive, it's because they're getting those numbers (15-85fps or whatever) by using frame generation AND by using upscaling. So upscaling renders at a lower resolution, upscales it to make it look pretty, which probably gets your framerate up to liek 30-40, then frame generation doubles that
i have a 3070 and there was a lot of hype around DLSS a few years ago so i expected greatness but honestly it doesn't do shit for me except lower visual fidelity and maybe MAYBE give 1-3 fps if i'm lucky. so color me skeptical. maybe 3.5 is good but, doubt
Weird.. In diablo 4 with all graphic settings maxed, with dlss off I get around 80 fps and with dlss on I get like 130 fps
Yeah either something is wrong with your PC or you decided to match your 3070 with a really old CPU
@@wing0zero i did skimp a bit on the cpu and went with a 3600 but it doesn't bottleneck a 3070 at all
Ray reconstruction is totally independent from Super resolution, you can turn it on without turning on anything else.
@phoenixfire8226 no other definitely xcan, depends on the title, settings, resolution etc but see testing where that card is clearly limited by that cpu under certain conditions
Ai powered pcs will eventually auto censor shit on your screen when your browsing the web lmao
There would be "jailbreaked AI". As always ;-)
@@igorthelight Aw fuck yeah no doubt!
There is an argument for not using frame generation. Namely input latency. But the rest of the tech can feel a lot like a free win. I cannot see any difference in picture quality turning on DLSS 'performance' in any game I've tried that has it. I honesty think it makes Cyberpunk look better. (I suspect it's better at AA than the traditional AA as implemented on that game.) So that extra framerate essentially has no trade off. -- Note: It's real framerate when just upscaling with no frame generation.
Next time I play chess/Civilization - I'll be sure to turn DLSS on...
...for anything else - nah, I'm good.
If the game doesn't perform without this fakeness - I rather refund.
I don't think most people care about like 10-20ms more input latency. 99% of people will not even notice it.
@@tripnils7535
Casuals might not...
...but you really think it is Casuals who are forking over nearly $2k for a gfx card?
It's super-fans that are over-extending to get on the 4k hype.
10MS is enough to get you killed every time in competitive gaming.
So - this isn't for gamers... it's basically for Console players. A way to keep consoles relevant. :D
Anyways - I'm staying out of it. I'm happy to switch to Intel come next gen. Anything to get away from NVidia at this point.
Frame Generation must simultaneously generate smaller frame and than be affected by DLSS super resolution to upscale it. So if we look at the picture 2:59 the DLSS FRAME GENERATION must be as small as TRADITIONAL RENDER and start generated simultaneously while we upscaling TRADITIONAL RENDER fame, and then FRAME GENERATION have to be upscaled with DLSS SUPER RESOLUTION while next TRADITIONAL frame rendering. I believe it will make the process even faster than now.
Narrator sounds like Ren from Ren and Stimpy cartoon
I think it's a little bit awkward to group DLSS 2, 3, and 3.5 under the "DLSS" name when they all do COMPLETELY different things and serve very different purposes.
Even so, still I'm excited to where this is heading
DLSS 3 has 5 presets that are just a bunch of different DLSS 2 versions. 3.5 is what really brings something new
Does DLSS 2 use deep learning super sampling? If so, what does 3 do? What about 3.5? Since they're completely different. "I think it's a little bit awkward to group this AI with this other AI, they are COMPLETELY different applications of AI."
not really. most 'numbered' technologies are like this. that's why they have different numbers.
@@rewardilicious dlss 2 and 3 do the same, but dlss 3 has frame generation on top of it.
And RR is a new function of 3.5
@@rewardilicious DLSS 2 (nvngx_dlss.dll) = Super Sampling
DLSS 3 (nvngx_dlssg.dll) = Frame Generation
DLSS 3.5 (unknown) = Ray Reconstruction
all of them (their .dll files) are under version 3.5
New buzzword for rendering- GENERATING !
No to dumb it down. Rendering is created from raw power of gpu&cpu . Generation is an AI program that uses your frame rate an makes extra frames in-between to make a better frame rate.
Look at rendering vs generation frame by frame and you'll at least see the difference if you don't understand it from my explanation.
No?
Nah, rendering is mathematically figuring out what it should be. Generating is making up something close enough to reality based on the information you have.
Lets say you had to draw a regular polygon. Generating will be drawing a polygon by hand and making it look close enough, rendering will by doing the same but calculating the angles before hand and then drawing with a protractor to make sure it follows the calculated angle.
In the future developers will be able to make a game out of a literal PowerPoint presentation and technology like this will turn it into a living vibrant game.
My 60hz 5ms monitor and I feel called out
according to steam thats like 85% of people
The intention: Push games PAST their limit to bring games not possible through native rendering.
The reality: Devs spend less time polishing their games, using DLSS to hit 60fps
I mean the numbers are good but if the latency counteracts the extra frames you produce then things like FPSs are a no go. Single players etc it can definitely be useful though
The AI frame generation on Cyberpunk 2077 is absolutely amazing. Need it in more games.
I agree. I've been playing around with it all morning, and it actually encourages me to use frame generation now. There's so much more clarity and it gives me a reason to want to use path tracing combined with fg now.
Must say love the fight between DLSS and FSR. Let's see which technique will win.
FSR isn't close to DLSS lmao
Um XESS is already beating FSR and DLSS is ahead by an absolutr landslide lmaooo
@@meowmeow2759 no it isn’t 😂
@@c-tothefourth4879 according to the games that I have played with that have both XESS looks far superior to fsr
hear me out - the graphics cards are powerful enough to run the game, they're just improving the ray tracing constantly by trying not to require so much of resources from the card. That being said, the game runs 80fps with no dlss by default but marketers nerfed the non-dlss option so you would be amazed how smooth the game runs when you turn it on and you don't have to deal with the 15fps garbage no more. Anyone :D ?
The problem with the barrier to entry reasoning is that high-spec games tend to scale their requirements to whatever the high-end hardware can do at the time.
whyyy, does the, guy, in the video, speak, like, that
Asmongold wisdomed the shit out of one of the main issues with DLSS and techniques like it (it's pretty much Interlacing all over again).
It's good for looks, but it only fills in frames, doesn't and can't know about inputs.
So all you're getting is the appearance of smoothness with even less input response.
In high mobility games, these techniques will make the games worse, far far worse.
And you have to look at it like this.
If your GPU has time to fill in up to 50-100% of frames, it means the engine is leaving all that performance now used to fill in those frames unused generating real frames.
DLSS and similar techniques are pretty much a hack by GPU vendords to bypass the fact developers deploy shitty unoptimized shite.
Ray reconstruction has nothing to do with frame generation. Next time watch the video before commenting. You're totally wrong and misinformed about how dlss3 works.
Your comment confuses the heck out of me. The guy in the video said that using DLSS he was able to render his object in 3ms as opposed to 40-60s. How does that equate to the engine leaving performance unused? Are you saying that 3ms spent on generating the between frame is time the GPU could have spent generating a real frame?
@@Webberjo Any processing power being dedicated to “generate” transition frames (which is essentially what frame reconstruction is - like in cell animation, each frame between each “key frame” to denote motion is less detailed but contains more information) is being taken away from rasterization processes, what they’re saying is correct - everything being used for DLSS (which is both a GPU and CPU bottleneck btw) takes performance away from the actual rendering engine of the game that upscaling works on top of.