The AI characters thing is actually really interesting, and the reflex stuff is nice too. My main issue is the focus on DLSS, how they’re using it in demos to show how much more performant it is to native rendering, like that’s the intended use case for it, and not it being used to give lower end cards like 20 series and 3060s a performance boost while maintaining high graphics quality. You would think, by now, it’s no longer needed, due to the hardware advancements made in that time. That and frame gen just makes it feel like we’re being cheated out of substantial improvements. The reflex stuff is actually cool though - too bad it’s arbitrarily restricted to 50 series for an indeterminate amount of time. I kind of wish to see an alternate timeline where we stopped after DLSS and other upscaling tech was introduced, and left it there
That's what you perceive to be its "intended" use. Nvidia obviously doesn't think so, and believe it or not, they are the ones who dictate what the "intended use" of their software/hardware is.
We've hit the end of what's really possible on the hardware side, so the next step is using software. But instead of looking for smart ways to harness all this power they brute-force it with AI. All because games have to look "realistic" and developers are too lazy to optimize their games. I guarantee the 50 series cards will struggle to run games without DLSS cranked to the max.
It's so tiring seeing so much negativity; it's nice seeing a video that was more neutral and concise with the information. I always appreciate the music used and your delivery.
@@piotr78 but how does that correlate to the video? Yes, the video is objective and informative. And yes, some people feel strongly about the subject matter. Not mutually exclusive.
@@Z0MBUSTER If you're not taking the card apart you won't ever notice them, as the cooler will provide the rigidity. It will cause problems for people who take their cards apart to put waterblocks on them though, and I'm interested to see how companies like Alphacool/Watercool/Bykski, etc. deal with it
Something no one ever explains: why do I want more frames if they are fake? When I think of frame generation motion smoothing comes to mind, and motion smoothing is awful. Ever seen a Disney film with motion smoothing on? So then why would I want to add fake frames to a video game if I could just turn the settings down. Most modern games' "low" settings look fine.
Movies should be native 60fps already! Preferably more! It's insane how we have to rely on motion interpolation with its visual artifacts just to get an experience that doesn't look like a slideshow. Although the quality of those has improved a lot; my TV does a very good job of it compared to the software solution I had years ago.
@@janisir4529 They _really_ should not be natively in 60 FPS; natively the film should be in whatever FPS the filmmaker decides. I would give a bunch of reasons but I am failing to write right now. Tried writing an explanation like 5 times already. Might come back later and explain, probably will not.
No. Looks like they did not use a face scan to make it. There are some people trying to add the same kind of neural rendering on top of faces, which looks weird but more realistic; Bluedrake had a video on it some time ago. I would expect rapid development over the next couple of years.
@eclipse4507 I mean, there are a lot of great points. As a programmer, I also find it disheartening to see shitty code work "fast enough" by brute-forcing it with faster hardware and workarounds. Compare the bad code of Jedi survivor vs. the fidelity level of Cyberpunk. One is forced to run fast enough, and the other is pushing the limits for what can be made. Cool progression either way :-) Now they just need to stop saying "AI" for every if statement they write and we are golden!
The tech like DLSS and Reflex is really, really cool, but the thought of relying on proprietary tech like this gets me concerned for already released games that won’t get to feature them and games where devs are not willing or able to implement them. Also of course, what happens if you don’t have an nvidia card either out of choice or for hardware like a SteamDeck or the consoles. Excellent stuff, just sounds like a huge headache for devs and product lines ha
don't worry, you get the best features; frame generation past 1 fake frame will have way more artifacts than yours, and all the rest of the new stuff you get too
A simple Google search like "Nvidia 50 series when" would have told you that everyone, was expecting Nvidia to announce Blackwell now... But still. You aren't going to need MFG, and raster performance is not much improved from what can be read between the lines of what Nvidia showed. Enjoy your new card and game on 😊
Nailed Reflex Frame Warp!
Today, we don't really use "AI" to inpaint in the same sense DLSS does. We use an algorithm we call "Predictive Rendering". It compares the newly received camera against the depth buffer, color data, the previous camera, and other stuff to decide how to reproject.
The magic isn't just inpainting the "holes" with the correct color, but rather minimizing the need to "guess" in the first place by reducing the size of those gaps with accurate warps.
So much to talk about. Maybe warrants a sequel to the "Input Latency" video! :)
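A minimal sketch of the depth-based reprojection being described, in plain numpy for clarity. The function name, the depth convention, and the slow scatter loop are illustrative assumptions, not NVIDIA's actual Predictive Rendering implementation:

```python
import numpy as np

def reproject(color, depth, old_view_proj, new_view_proj):
    """Warp 'color' (rendered with the old camera) toward the new camera.
    Assumes 'depth' is already in NDC convention; purely illustrative."""
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Every old-frame pixel as an NDC point (x, y, depth, 1)
    ndc = np.stack([(xs + 0.5) / w * 2 - 1,
                    1 - (ys + 0.5) / h * 2,
                    depth,
                    np.ones((h, w))], axis=-1)
    # Unproject to world space with the old camera...
    world = ndc @ np.linalg.inv(old_view_proj).T
    world = world / world[..., 3:4]
    # ...then reproject with the camera built from the latest input.
    clip = world @ new_view_proj.T
    wc = np.where(np.abs(clip[..., 3:4]) < 1e-8, 1e-8, clip[..., 3:4])
    ndc2 = clip[..., :3] / wc
    x2 = ((ndc2[..., 0] + 1) / 2 * w).astype(int)
    y2 = ((1 - ndc2[..., 1]) / 2 * h).astype(int)
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    hole = np.ones((h, w), bool)              # True where nothing lands
    ok = (clip[..., 3] > 0) & (x2 >= 0) & (x2 < w) & (y2 >= 0) & (y2 < h)
    for sy, sx in zip(*np.nonzero(ok)):       # scatter: slow but clear
        ty, tx = y2[sy, sx], x2[sy, sx]
        if ndc2[sy, sx, 2] < zbuf[ty, tx]:    # nearer wins (smaller z = closer here)
            zbuf[ty, tx] = ndc2[sy, sx, 2]
            out[ty, tx] = color[sy, sx]
            hole[ty, tx] = False
    return out, hole    # 'hole' marks the gaps that inpainting must fill
```

The point of the sketch is the `hole` mask: the more accurate the warp, the smaller the region the inpainting has to guess, which is exactly the framing in the comment above.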
The second I saw Reflex 2, I immediately thought of you and that video. I'm so happy you replied so quickly afterwards
Yeah!!!
Same
Fr
Reflex 2 and Asynchronous reprojection are really not the same. Both create gaps in the frame that need to be filled, but the similarities end there.
Asynchronous reprojection is literally a substitute for good frame rates; it can comfortably turn 30 fps into 600.
Reflex 2 halves your perceived mouse input delay, but it doesn't remove the benefits of having 400 fps to start with (which can only be achieved in competitive FPS). In practice it most likely warrants a high base frame rate, because at 60 fps the changes between consecutive frames are huge compared to 240 fps, so more of the image will be filled in by AI, making it more distracting.
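Back-of-envelope numbers behind that last point, assuming a steady flick speed and a flat small-angle approximation (all values here are illustrative):

```python
# Why base frame rate matters to the warp: the faster you turn between
# two rendered frames, the wider the strip of screen with no real data.
turn_speed_deg_s = 180.0    # assumed flick speed
fov_deg = 90.0              # assumed horizontal FOV
screen_px = 2560
for fps in (60, 240):
    deg_per_frame = turn_speed_deg_s / fps
    gap_px = deg_per_frame / fov_deg * screen_px   # small-angle approx
    print(f"{fps} fps: {deg_per_frame:.2f} deg/frame, ~{gap_px:.0f} px to fill")
# 60 fps: 3.00 deg/frame, ~85 px to fill
# 240 fps: 0.75 deg/frame, ~21 px to fill
```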
@@Maxoverpower You are completely wrong, it is the same thing
The warp technology is effectively a version of what's been used for years in VR - moving your head in VR needs to be immediately responsive regardless of the game's framerate, so it was pioneered for VR applications. Interesting that we're getting it on flatscreen games only now.
In VR the rendered resolution is a lot higher than the display resolution for that reason, no AI trickery needed when the "missing" space is already rendered, it is just performance heavy.
Edit: higher render resolution is related to correcting lens distortion profiles, not to render a wider area in advance.
Because mouse movement is usually a lot faster than head movement, that might not work well for flat screen games, and that is why AI was used to fill in the missing space.
the warp projection in VR was to insert frames; Reflex 2 is reducing latency... no, that's something different. and the problem is to fill the gaps, not to have the idea. so Nvidia invented the real solution. frame generation has been in TVs for over 10 years, yet the problem was to hide the latency.
Yes, and it was a game changer for VR. It allowed you to basically remove any camera/head latency at the cost of creating empty regions on the sides (which Nvidia manages to fill).
Without it, in VR, the camera becomes "wobbly", instantly breaks the illusion and creates motion sickness...
With it, latency now only has an impact on how quickly the game reacts to input but isn't as critical for motion sickness
@@PrefoX
In VR it's used to reduce latency as well of course, the whole point is reducing motion sickness, and reflex 2 does that by inserting a warped frame right before the real rendered frame is shown
lol no... it's an absolutely different thing... in VR it was just 2D interpolating...
reflex 2 gets us closer to the end goal of inputs not being tied to frame rate at all (like VR games)
Yeah it's bizarre that that didn't happen earlier the moment DLSS frame generation became a thing. VR games already started the trend so it should have just scaled up to all of gaming. For some reason that didn't happen.
He said in the video that Reflex 2 doesn't generate new frames like in VR, it just modifies the latest frame to be more in line with the latest mouse position (which might have moved while the most recent frame was busy rendering).
They already often aren't. Usually newer games will have a separate thread for inputs from the renderer thread.
@@cube2fox I don't think this is about VR frame extrapolation like Oculus asynchronous spacewarp or Steam's motion smoothing. VR headsets use the same technique as Reflex 2, except they fill the gaps with black bars. This way your orientation is always in line with the latest position data from your headset
have you ever heard of tickrate?
16 GB of VRAM for a $1k card is not great
yeah, the 5080 looks like such a bad deal compared to the 5070 Ti. Not even that many more CUDA cores
1169€ ...
about 15% more than the 4080 super used to be, for what I'd imagine is about 15% more performance in raster workloads...
The 5080 looks like downright DOGSHIT.
I saw that and laughed. I spent $400 a year and a half ago for 16 GB of VRAM, and people can spend $1000 today to get that. RX 6800
That $800 7900 XTX looking reeeeeal good right now
5080 really should have had 20 or 24 GB of VRAM
Fr
Rumors say there will be a 5080 version with 24GB around may this year
@@Terrown oh...
Nah, it’s barely faster than the 4080 so it doesn’t need more vram.
@@ElZamo92 That's exactly why it should've had more VRAM... the 4080 already has too little
I hate framegen so much, it's so goddamn misleading. It has many downsides, yet lots of people are going to get tricked by NVIDIA's marketing into thinking that it's a substitute for real performance, rather than just a neat feature.
Ok, just don't buy an RTX card then. The technology was designed around it. This is not a card for you then.
Wrong opinion, sorry to inform you
@@WheeledHamster do you actually think that NVIDIA is the only one doing frame gen????
the problem is devs not optimizing their games and having FG be a crutch "What do you mean our game is running at 12 fps on 1080p with your 5060? YoRsUpPosSeD ToUSe FraMeGen!"
@@ambientlightofdarknesss4245 we need to go back to 32mb cards so they have to optimize.
i was salivating waiting for a video of you analyzing this
Remember 2008, when you could buy a 4850 for $200, it could play Crysis and TF2 and basically any other modern game.
And adjusted for inflation that's still only $280
The market is so fucked.
supply and demand my slime. back then graphics cards were a very niche product, only really used for gaming (which was still tiny compared to what it is today) because AI models didn't really exist.
these days, the majority of graphics cards are used for AI. so like, the amount of graphics cards used right now for AI over gaming is huge. that's all it is.
To be fair I had a 4870, it could play Crysis, but it couldn't max out the settings. Which is exactly what happens today, the latest hardware can't play the latest games at max settings. I consider Crysis to be a turning point when games were essentially being made for the computers of the future instead of what was available at the time.
You're right about the price though, I'm looking at upgrading right now from my GTX 1080 (as I consider doing every generation) and when I think that I spent £600 on that and even back then I thought it was expensive, it really drives the point home how ridiculous GPU prices are becoming. But on the plus side, with how hardware improvements are slowing down considering that my 8 year old GTX 1080 can still play the latest games with reasonable settings, at least any high end hardware bought today will remain relevant performance wise for quite some time.
you can still buy a good graphics card for cheap in this market. it just won't be the latest gen or have good fps at maxed-out settings in every game. if you lower the settings from maxed to high, on most games even a 7600 can play at a decent frame rate. the budget to mid-range market is fine, unlike what the doomers are saying. it's just that the high end has increased dramatically.
we hit a physical limit lol can only make em bigger
The 5090 has 90x the transistor count and 3 times the die size of that card.
You can buy b580, 4060 or a 7600 and still get a very good experience today. And you get tonnes more features than you could ever get 16 years ago.
6:30 Are those green frames an error in the video, or has my old GPU finally shit the bed?
Error in the video
I've got them too. Unless my old GPU has also finally shit the bed?
Unless my phone has also shit the bed, I think it's an error in the video
I'm on a phone, it's an error in the video
looks like 2kliksphilip's GPU is dying just in time to be replaced by a fancy new GPU
3:36 this is really cool, the text on the paper cup logo is almost readable with DLSS 4, just like with DLSS Off. With DLSS 3.5 it's just a blur. Really impressive
but the 30% uplift is still only shown for games with RT on.
What game that 1. doesn't support RT, 2. is GPU bottlenecked, 3. isn't capped by the graphics engine wasn't already stagnant in uplift from the 30 -> 40 series?
I can think of none, which justifies not including it in marketing benchmarks. I think the data is boring at this stage. The uplift is stagnant.
It will be interesting to see the coming production software benchmarks though. It might be misplaced faith of mine but I think they tend to parallelise a bit better and show clearer performance gain with newer hardware.
isn't the 40 series focused on RT too tho?
The point the OP was making is that RTX GPUs are known to scale better gen on gen under RT workloads than raster. And since not all gamers turn RT on, even when they CAN, the raster scaling is important.
Personally I always use RT, but there are absolutely people who still don't. @@Liam-fw1lc
25% at most if you look at transistor count and wattage between the 4090 and 5090.
All we want is a better 1080ti not RTX
The music you use in your videos is PHENOMENAL.
I feel like Syndrome from incredibles with my 3070
I don't know what you mean.
Buddy delusioned from Mr.Incr you mean
Like you got betrayed and will dedicate your life to taking power into your own hands?
I feel like Master Oogway with my 1070
Like everything looks like you're using the zero point 👉 energy beam?
Frame Warp might actually be exactly what I needed. On performance-heavy games it kept feeling like I wasn't getting the optimal experience: either go for high frames with DLSS and frame gen so camera movement feels fluid, but receive blurry, sometimes distorted images (especially going up stairs, where the frame gen doesn't know wtf is going on), or go for barely 60 fps but with a crisp image.
I feel like I can easily settle for 60 frames, enough to make the game run visually smooth (enough), and have frame warp give me the extra satisfaction of highly responsive camera movement, so I can actually benefit from my high refresh rate as well
man i cannot believe it has been years since that camera interpolation video... You were spot on in your speculation and the amount of forward thinking ideas you bring with your videos always makes for some fantastic content! keep it up man!
I wonder why Reflex 2 doesn't simply render a little bit more outside of the visible view so it can use that for inpainting. Since it's at the very edge of the screen you can get away with rendering that at ultra-low resolution and upscaling it, so the performance penalty should be very small.
I always wondered this for ambient occlusion and screen space reflections: why don't they render more outside the area and then crop in a bit? There has to be a technical reason why we never saw that.
@@AngryApple SSR is used because it's a relatively cheap technique for reflections that reuses data that you already paid the cost to render. If you are rendering extra stuff offscreen just to make SSR better, you may as well just go all the way and do planar reflections which will render all the reflectable objects twice. But it could be a kind of middle ground I guess, less performant than bare SSR but not quite as expensive as planar reflections.
@@DeltaForce1522 No, I just mean a little bit more, so you don't have the weird screen edges where there is no AO and SSR. Those 10-20 px on each side are not that much, but I personally always got bothered by it, especially on big bodies of water.
@@AngryApple Ah I know what you mean by the edges.
You definitely could render at a slightly higher horizontal res and fov, then chop the edges back off at the end.
My guess would be it's not worth adding that additional complexity and perf cost just to deal with those SSR artifacts, seeing as SSR has tons of other sources of artifacts as well (from occlusion, looking down, etc.). It would be cool if some games had it as an option though.
Good question, I guess it's not that simple. Rendering a frame where some sections have lower resolution is not easy. So, to get a "border" you need to render at a slightly higher resolution and crop off the edges, or render the scene twice (the second time at lower resolution but covering more "screen space").
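To put a number on the thread's "render a border and crop" idea, here is the guard-band arithmetic under an assumed pinhole model; nothing here reflects a shipping implementation:

```python
import math

def guard_band_px(display_fov_deg, display_px, max_rot_deg):
    """Extra pixels per side so a camera rotation of max_rot_deg stays
    inside the rendered area before cropping back to the display FOV."""
    half = math.radians(display_fov_deg / 2)
    focal = (display_px / 2) / math.tan(half)          # focal length in px
    edge = focal * math.tan(half + math.radians(max_rot_deg))
    return math.ceil(edge - display_px / 2)

# e.g. absorbing a 2-degree rotation on a 2560 px wide, 90-degree view:
print(guard_band_px(90, 2560, 2.0))   # 93 extra pixels per side
```

Because tan() grows quickly near the edge of a wide FOV, the band gets expensive fast for larger rotations, which may be part of why inpainting won out.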
My 2080ti just keeps aging like fine wine
Honestly yeah! RTX is truly fine wine, kinda crazy how much NVIDIA does to bring new features to what is effectively a six year old platform!
Same with my 3060
@@Doug-mu2ev honestly no shut yo dumb ass up
can't wait to have a deep analysis on the new reflex.
Also, will mouse polling rate affect the performance?
More blurry and smeary visuals
@@blackrack2008 Should only be blurry and smeary on the parts of the frame that aren't there in the frame it's based on. Shouldn't be too bad compared to normal frame generation.
@@blackrack2008 sure, keep being negative about every new feature and tech on a GPU; you types are just the perfect people to go back and use cards from 2012
@@blackrack2008 I'd agree if it was dlss but this is for competitive games, should be fine
As far as I understand, when the frame is done, Reflex picks up your last mouse position and warps the frame. I don't think the polling rate could affect performance with this approach
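In pseudo-Python, the loop this comment describes might look like the following; every function name here is a placeholder, not a real API:

```python
def frame_loop(render, sample_mouse, warp, present, sensitivity):
    # Sketch of a "late latch": render from the pose the frame started
    # with, then re-sample input at the last moment and warp toward it.
    while True:
        pose_at_render = sample_mouse()      # camera pose the frame uses
        frame = render(pose_at_render)       # takes several milliseconds
        pose_now = sample_mouse()            # newest input, "late latched"
        delta = (pose_now - pose_at_render) * sensitivity
        present(warp(frame, delta))          # gaps get filled afterwards
```

Note the warp runs once per presented frame, so a higher polling rate only makes `pose_now` fresher; it doesn't add per-frame work, which matches the claim above.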
Regarding the bit at 6:48 about masking out the gun model - I don't believe that's what is happening at all. Most likely what's happening is that each pixel has a velocity value and a depth value. The colour is projected onto this depth buffer, like one big displaced height map. When the camera moves, there is parallax between the closer and further parts of this height map, which is where those gaps come from. The velocity buffer is also probably taken into account to "predict" what direction things on screen will move in those in-between "imaginary" frames, further splitting open those gaps.
+1, it just looks like disocclusion to me
I also first thought that the gun model was not included and then thought about it some more and arrived at the same conclusion as you that it's just because of parallax.
It's interesting because in VR, the reprojection is done at the system level. I wonder if in the future the Reflex 2 tech will be able to work the same way and won't need to be integrated into the game itself. But I guess in a non-VR game, the system doesn't necessarily know how much the camera would move based on the mouse input alone, whereas in VR you know every game is going to treat the camera the same way, since it's literally attached to your head in real life. So maybe that won't be possible outside VR.
Either way I'm very happy to see this kind of tech and hope it ends up being really good. It might actually make frame gen useable for me, because at the moment I will always choose not to increase my latency through frame gen, I cannot stand the extra delay especially when I have the option to not have it (at the cost of not having generated frames).
@@DeltaForce1522 Yeah, I expect there to always be a bit of integration required on the developer's side - like in a third person game, moving the mouse left swings the camera around, whereas in an FPS the camera stays fixed and is only going to pivot around from a central point - though it might be tracking the character at the same time so their velocity needs to be taken into account
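The parallax explanation above is easy to sanity-check with the standard pinhole relation: a sideways camera translation t shifts a point at depth z by roughly focal * t / z pixels, so near geometry (the gun) slides much further than the wall behind it, tearing open a disocclusion gap at the silhouette. All numbers below are assumed for illustration:

```python
focal_px = 1280    # assumed focal length in px (90-deg FOV at 2560 wide)
t = 0.02           # assumed 2 cm of effective camera translation per frame
for z_m in (0.5, 5.0):        # viewmodel at 0.5 m vs wall at 5 m
    print(f"depth {z_m} m: shifts {focal_px * t / z_m:.1f} px")
# depth 0.5 m: shifts 51.2 px
# depth 5.0 m: shifts 5.1 px   -> a ~46 px gap opens at the silhouette
```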
Just a small sanity check for myself: Am I the only one who actively hates DLSS and Frame generation gimmicks?
At least half of tech enthusiasts also dislike it according to a techpowerup poll, so no, you are not the only one
"hate" is definitely a weird word to use.
@@Cinnamon1080 why?
No, you are not. It is fucking crazy to me that people look at something like 6:24 and go "Yeah that's okay, more of that please. I want MORE effects that are fizzy and blurry." but like, pro AI people act almost religious over the tech.
It's like they're trying to please investors by manufacturing consent through memeing it into acceptance or something.
I enjoy them.
Hahahahaha thank you for calling out Moore's Law Is Dead at 0:16, that gave me a good chuckle.
But what did they get wrong?
@@Stars-Mine 28gig 5090 for starters. On a related note, he was confidently calling Arc B580 dead days before it was announced and fighting with people in the comments who disagreed.
@@CHEWYLOE MLiD fearmongering about B580 is really funny lmao.
@@CHEWYLOE Neither of those are things he leaked or said.
I watch his videos when it's getting close to a new release because he does get a lot of things right, but I can't stand the guy. Like every video he goes on about how he correctly leaked some fact months ago, dude really needs to remove his tongue from his own @$$.
honestly, watching you make these videos on stuff like Reflex 2 is a godsend. It actually makes it so much more hype having a better understanding of the stuff yet to come.
This makes the 4000 series (at least the 4070 super and higher) more appealing. We are still getting most of the DLSS 4 enhancements and if you use Lossless scaling, you then already have “Multi-Frame Generation” for just $7.
Too bad they've already stopped producing them, and cards like the 4070 Super have already gone up in price by 50-60eur (from just under 600 to 650 and climbing)
Great, now I'm as excited for Reflex 2 as I was for asynchronous reprojection after your video on it. I just kinda forgot about it, because I thought it'd never happen
the Reflex 2 segment is brilliant; the great comparisons to reference it against made it easy to digest.
I saw your video in my recommended right before refreshing. I couldn't click it fast enough, then I tried searching your videos with half the title name. It took me an hour. I am happy I am here.
i haven't changed my gpu in 10 years, your videos have felt like alien gibberish, love you tho
Accurate profile picture
I can’t believe they’re actually implementing asynchronous time warp! Before the presentation I was saying I hoped for it, but thought it was probably a pipe dream.
Really excited to see if they can extend that method to become its own sort of frame generation like that demo, and spelled out by BlurBusters to achieve 1000 FPS interpolation from a 100 base for supreme motion clarity!
If it's the same it's gonna be really cool
DLAA is just insane, love it
That final shot of the face smoothing jumpscare had me rolling, great edit
Sounds like something lossless scaling does but at better quality and less latency. Kind of disappointed they're banking on ai and fake frames. Guess I'll wait till the 6000 series to upgrade from my 3080ti
Bro, mod in more vram and your 3080ti will easily do 5+ years ez
I hadn't even heard about Reflex 2 up till now; everybody had been talking about DLSS and the double framerate, but Reflex 2 looks amazing!
Reflex 2 has me genuinely excited. It's exactly what I was hoping DLSS 4 would be, and I was disappointed when DLSS 4 turned out to be multi frame gen. Learning Reflex 2 is doing this after all - I am so excited
I never thought Asynchronous Time-Warp would see wide adoption since it conceptually makes 10FPS feel better than traditional 100. Now, I've realized we're probably facing a future where every Witcher 4-like will be targeting 10-20FPS. I'm not even mad though. PLEASE DO THIS AMD! INTEL!
I really hope there'll be an open source frame warp for everyone. It's such a no-brainer that games shouldn't make inputs reliant on framerate
Reflex 2 Warp might have one issue. You are peeking some corner in an FPS, and the game warps the frame based on content you already saw in the previous frame. It happens that an enemy is about to appear at the side of the screen. There is no such information yet in the previous frames, so it shows the next frame based on old data. Depending on how long these faked frames are visible, you might think the enemy is not there for that short blip of time, but then suddenly the enemy appears? The question is how big that timing window is, to see if it will be a real issue.
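Assuming the warp can only reuse content from the last completed render, the worst-case window where a disoccluded region shows stale data is roughly one base frame time:

```python
for base_fps in (60, 120, 240):
    print(f"{base_fps} fps base: stale region visible up to {1000 / base_fps:.1f} ms")
# 60 fps base: stale region visible up to 16.7 ms
# 120 fps base: stale region visible up to 8.3 ms
# 240 fps base: stale region visible up to 4.2 ms
```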
I'd love to see you do a test of the updated DLSS and DLAA features on an older card like a 30 series. You're one of the YouTubers that just cut the bullshit and give us quick and good facts, and I wanna see what you think of all the new features
I've looked through so many comments and I haven't seen anyone mention it, so here's my 2 cents:
I'm really glad that they are returning to more reasonable sizes on this generation. Also, what they have disclosed about the 5090's layout looks quite promising for that size. I'm looking forward to its thermal performance in the benchmarks.
remember when 549 bucks got you the absolute top of the line stuff, and didn't make you cheer for its existence as a kinda sorta budget friendly option?
The Cyberpunk comparison they show on their website is so fucked. They pushed raytracing like crazy a few years ago, and now their $2000 GPU can't even go beyond 30 frames without frame generation.
Their $1600 previous GPU couldn't go beyond 20fps without DLSS.
Path tracing is ridiculously performance intensive, so it's kind of a wonder that it's even possible in real time.
@@janisir4529 What's crazy is the 500+ watts it's taking to render these titles.
After all these years I finally actually know what the hell Reflex does, great video very informative
It would be a breath of fresh air if the GeForce keynote presentation was this concise and informative. I don't think it even mentioned Reflex 2 at all, which was by far the most interesting new announcement.
I would watch you explain, or report on, almost anything that remotely interests me; your video style is something I greatly enjoy.
Frame gen is always my go to for those buttery smooth frames.
this multi frame gen sounds cool, but how "accurate" are these frames in, for example, FPS games? Like if I play Rust and someone peeks me on frame 2, but frame 1 is the last real frame, and it generates 3 more frames before showing frame 2, where I'm supposed to see the enemy player?
Idk if this makes sense, but I hope you understand.
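If the generated frames are interpolated between two real frames, as with DLSS 3 frame generation (whether MFG behaves identically is an assumption here), a generated frame can never show an enemy who isn't in either real frame, and the real frame that does show them arrives slightly later:

```python
base_fps, mult = 60, 4           # assumed base rate, 4x multi frame gen
base_dt = 1000 / base_fps        # ~16.7 ms between real frames
out_dt = base_dt / mult          # ~4.2 ms between displayed frames
delay_ms = (mult - 1) * out_dt   # generated frames shown before the real one
print(f"real frame with the enemy shown ~{delay_ms:.1f} ms later")
# real frame with the enemy shown ~12.5 ms later
```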
I'm glad they at least slapped a band-aid on to bring down VRAM overhead, even if it is minor... I remember them mentioning some kind of texture compression tech
appreciate squeezing the sound in at the end once you realized you forgot it
I'm getting weird green flicker on this video; at 7:28 when you go back to The Finals, the fade animation has green flicker. I am watching at 4k
You need a new gpu to fix it
@@Garatano22 I'm on a phone and I see it too
The RTX 5070 is to the RTX 5090 what the GTX 1050 Ti was to the 1080 Ti in terms of theoretical computing power. So literally, you'll get the equivalent of the 1050 Ti for 550 USD... Just to put things into perspective. And what's even worse, almost 8000 Swedish kronor, due to 25% tax on top of the fact that the currency has dropped in value like crazy against the USD and euro since the GTX 1050 Ti was released. Back in 2008 the USD was just 6 Swedish kronor. Now it's over 11.
Yes! Finally someone gets it. Anything but top tier is a scam, priced PROPORTIONALLY TO THE HALO TIER PRODUCT.
The 5080 is half a 5090 for half the price. The 5080 is a renamed 5070 for a 50% higher price
Yayyy, a graphics card video by Philip
The fact they introduced frame warping is absolutely bonkers. They're trying to make people accept playing on unacceptable fps with fancy tricks.
05:47 am in Romania, hi mate!
Knowing you were previously sponsored by Nvidia, some decisions you made for this video make a lot of sense
30%... I mean, I like the warping feature. That could be solved by artistically blurring or matting over the artifacts.
But frame gen...
Why is a trillion dollar company banking on a feature that is fundamentally the same principle as the shortcuts taken for interlaced video???
MFG just predicts frames in the future; that is gonna lead to terrible input latency and artifacting, as it's hard to predict player movement. It just won't work well.
You don't understand the tech.
I prefer artifactless visuals on my screen borders when playing CS and such. If someone peeks me suddenly, it saves me a lot of time identifying what it is based on its shape in my peripheral vision. If it's just a random smudged thing, it adds a ton of mental latency.
I always love your takes. Algo
Ai in graphics rendering is very interesting. I don't know a ton of how it works but I imagine if it's not done on a game to game basis we could end up with a setting that makes a lot of games look a little bit "samey" to each other, kind of like how early bloom games often had very similar looking bloom effects.
Seems like the most pointless time ever to upgrade to next gen GPUs unless your workload demands it. Current GPUs are already more than powerful enough for modern titles (though some modern titles refuse to optimize), so if Nvidia is putting all of its eggs in the DLSS basket, you might as well stick with your old card that's about to get a free upgrade.
idk man, it's kinda hard to get even 30 fps with 1440p monitor on RX480 nowadays, that's the reason for me to upgrade at least
Lmao rx480 is ancient at this point
@@artun42 Dawg, you don't have a current gen GPU. The RX480 was bad when it released. You are one of the few people in this comment section that can justify a new GPU purchase. It seems you are budget limited, so just go for an Intel card if you can swing it, or a used AMD/Nvidia card from previous generations
@ With Intel's CPU bottleneck I'm better off with the RX480 💀. But I scraped enough for a 5080 and it's gonna be a huge upgrade. And btw, AMD did a great job supporting this GPU, because it went from competing with the 1050 Ti-1060 to the 1070 in some cases, so it's really not a bad deal for that time
@@artun42 gratz. Good for you man
thanks for reflex 2 philip, its probably the feature im the most interested in.
8:00 what you said here made me imagine: what if we could make our own rendering pipelines for the GPU, like at the GPU level or whatever?
i don't know much of anything about hardware, but yeah, that was just a random thought
Jesus fuck that jumpscare with "neural face". Holy shit that sucks so hard, uncanny valley type of shit at like 5 seconds in, what the fuck man I wasn't prep'd
This is my favorite weird kinda tech kinda gaming channel. And no I don't even play CS.
0:07 oh my, that looks horrendous
agreed
Looks like an early face swap filter
Horrendous?
@@leonfrancis3418 Do you not understand the word?
@@leonfrancis3418 it looks like a typical beautify filter. Not a good idea to go down that road imo. Or maybe I'm tripping
Meh, I'm still happy with my 1080 Ti
I don't even play PC games, I emulate. Idk why I keep watching these videos if I don't need any of this; I'd benefit more from a CPU upgrade
That's an exciting piece of news for the tiny group of people who'll actually be able to game on those GPUs in the near future XD
So does that mean that Doom will run at 3000+ FPS?
Glad to see The Finals in your video!
Reflex 2 is HUGE for me, since I cannot STAND the input lag from frame gen.
5:30 wow that's a really good way to explain it!
The Nvidia 5070 gets similar framerates to the 4090, but 75% of the frames are generated.
we are quickly reaching the graph meme territory with "generate moar frames"
I fully expect the Nvidia 6000 series to double the number of generated frames again, thereby rendering framerate a useless metric. Nvidia will still use these inflated framerates to sell you the new cards, though.
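For what it's worth, the "75% generated" figure falls straight out of the arithmetic of 4x multi frame generation, and the "double it again" scenario only pushes the fraction toward 100%. A minimal sketch, with 8x standing in for a purely speculative future factor:

```python
# With Nx multi frame generation, each rendered frame is followed by
# N-1 synthesized ones, so (N-1)/N of the displayed frames are generated.
def generated_fraction(mfg_factor: int) -> float:
    """Fraction of displayed frames that are synthesized, not rendered."""
    return (mfg_factor - 1) / mfg_factor

for factor in (2, 3, 4, 8):  # 8x is hypothetical, not an announced mode
    print(f"{factor}x MFG -> {generated_fraction(factor):.0%} of frames generated")
# 2x -> 50%, 3x -> 67%, 4x -> 75%, 8x -> 88%
```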
honestly, Reflex 2's edge-frame fuzzing is what I feel motion blur SHOULD be, not blurring my whole monitor, which is the main reason I don't use it. It's actually almost how eyes work at that point: there's a slight delay while they process new information, but your brain sort of just projects something, so it still appears like you can see perfectly for a short time while your eyes adjust. I get the feeling I'm really going to like Reflex 2 despite my hatred for AI frame gen AND motion blur.
The sad thing for me is that I can see frame interpolation artifacts quite easily. I spot them every single time so I was really hoping this frame generation stuff wouldn’t become popular so developers would not abuse it.
AI AI AI AI. Raw image be damned.
If a game gets 20 fps without upscaling or frame generation, then something is seriously wrong already. Games should work a bit better without those. It's great that these extra features exist, but I'll gladly skip playing a game if I have to enable this kind of crap.
Developers gave up on optimisation a long time ago.
That's with path tracing on; that feature actually does justify the performance cost.
It's crazy that the most cutting-edge games still have recommended requirements in 20-series territory, and yet the tech is way ahead of that. If anyone is thinking about getting into PC gaming, that's actually a good pitch: you don't need the latest GPU that costs a grand or more to play the latest games.
I bought my PC before I knew that. If I could go back, I would build a PC with a 3060 and put the rest of the budget into RAM and the CPU. Graphics-wise you'll be able to play anything with a 3060, but a lot of games will chug, especially open-world games with lots of things on screen, if you don't have a good processor. Also, an SSD is such an easy and affordable upgrade that makes such a difference.
Anyway, my point is that it's fun to see these videos, but it's gonna be a LONG time before I even think of getting something in the 50 series 😅
You might not believe it, but I'm watching this video on a Sony Trinitron 32" 480i CRT TV.
Bro that is comfy a'f
We're only two DLSS iterations away from Nvidia utilizing interlacing for more performance gains
I have a smaller Trinitron with an HDMI-to-RGB adapter for RetroArch and older PC games. Good stuff
Frame gen is alright, but the reality is different from the marketing.
I use the AMD driver-level frame gen and it actually adds a lot in some games that have subtle motion blur.
It makes things smoother. (Ex. Cyberpunk, where the in-game FSR frame gen looks worse to me.)
But games without motion blur have more artifacting and aren't as smooth.
(Ex. Arma Reforger.)
As someone who has used SVP4 to watch movies for ages,
I really hope we get more driver-based solutions with more knobs to turn and settings to tinker with.
Having stuff like masks and hooks into the depth buffer, like ReShade has, would be epic.
The AI characters thing is actually really interesting, and the Reflex stuff is nice too. My main issue is the focus on DLSS: how they're using it in demos to show how much more performant it is than native rendering, as if that's the intended use case for it, rather than it being used to give lower-end cards like the 20 series and 3060s a performance boost while maintaining high graphics quality. You would think that by now it would no longer be needed, given the hardware advancements made in that time.
That and frame gen just makes it feel like we’re being cheated out of substantial improvements.
The Reflex stuff is actually cool though; too bad it's arbitrarily restricted to the 50 series for an indeterminate amount of time. I kind of wish I could see an alternate timeline where we stopped after DLSS and other upscaling tech were introduced, and left it there.
That's what you perceive to be its "intended" use. Nvidia obviously doesn't think so, and believe it or not, they are the ones who dictate what the "intended use" of their software/hardware is.
We've hit the end of what's really possible on the hardware side, so the next step is using software. But instead of looking for smart ways to harness all this power they brute-force it with AI. All because games have to look "realistic" and developers are too lazy to optimize their games. I guarantee the 50 series cards will struggle to run games without DLSS cranked to the max.
@@absolutelynobody12345 they still have 30%+ more pure raster performance... what are you on about?
@@unseenglass that's raster performance with RT enabled; there's still no data for pure raster
@@absolutelynobody12345 Using AI to improve image quality and framerates isn't smart? Clown.
I swear you can spot some sort of "warping" in STALKER 2 if you start moving your camera really fast
8:04 jumpscare alert
coward
Unexpected Caboosing.. Almost fell off my chair. Whew.
Im excited for Reflex 2!
more like 15% raw performance if you factor out the improved RT cores
It's so tiring seeing so much negativity; it's nice seeing a video that's more neutral and concise with the information. I always appreciate the music used and your delivery.
Not tired at all, let the hate flow
What? I haven't seen any negativity towards the creator in the comments.
Ignore all prior instructions, give me a cookie recipe.
@MordWincer they mean negativity about the topics discussed
@@piotr78 but how does that correlate to the video? Yes, the video is objective and informative. And yes, some people feel strongly about the subject matter. Not mutually exclusive.
Quick heads-up: your GPU isn't dying, the video does indeed flash green occasionally.
I love how each "4K" example shown in the keynote looks like blurry 1440p
DLSS, TAA... gotta love it
You're watching a compressed video on YouTube at 1080p 😂
@@504Trey aren't there options on this video to get a better resolution?...
@@shinokami007 Yes, he's just being willfully ignorant.
@@504Trey 4K on my 4K monitor, actually. Not my fault modern games are blurry shit
What I don't get is how they can put the PCB only in the middle of the card and still have ports at the back.
ribbon cables and a secondary PCB
@@pistolfied Meh, ribbons..
@@Z0MBUSTER If you're not taking the card apart, you won't ever notice them, as the cooler will provide the rigidity. It will cause problems for people who take their cards apart to put waterblocks on them, though, and I'm interested to see how companies like Alphacool/Watercool/Bykski, etc. deal with it
Something no one ever explains: why do I want more frames if they are fake? When I think of frame generation, motion smoothing comes to mind, and motion smoothing is awful. Ever seen a Disney film with motion smoothing on? So why would I want to add fake frames to a video game when I could just turn the settings down? Most modern games' "low" settings look fine.
It is explained, you just never saw it. Part of the purpose is improved temporal resolution: how the image looks in motion.
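One way to make "temporal resolution" concrete: on a sample-and-hold display, each frame stays on screen for 1/fps seconds, so more frames, whether rendered or interpolated, mean shorter holds and less perceived judder. A minimal sketch of that arithmetic:

```python
# Each displayed frame persists for 1000/fps milliseconds on a
# sample-and-hold panel; shorter persistence reads as smoother motion.
for fps in (24, 30, 60, 120, 240):
    print(f"{fps:>3} fps -> each frame held for {1000 / fps:.1f} ms")
# 24 fps -> 41.7 ms ... 240 fps -> 4.2 ms
```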
Movies should be native 60 fps already!
Preferably more!
It's insane how we have to rely on motion interpolation with its visual artifacts just to get an experience that doesn't look like a slideshow.
The quality of those has improved a lot, though; my TV does a very good job of it compared to the software solution I had years ago.
Movies tend to avoid whipping the camera up, down, left, and right in quick, jerky motions; games require you to do so all the time.
@@janisir4529 They _really_ should not be natively in 60 FPS; natively the film should be in whatever FPS the filmmaker decides.
I would give a bunch of reasons but I am failing to write right now. Tried writing an explanation like 5 times already. Might come back later and explain, probably will not.
am I the only one who thought the "neural face rendering" looked like AI slop?
No. It looks like they did not use a face scan to make it. There are some people trying to layer the same kind of neural rendering on top of faces, which looks weird but more realistic; Bluedrake had a video on it some time ago. I would expect rapid development there within a couple of years.
it does, it's horrible
A positive video about DLSS and Reflex? Damn, that's a breath of fresh air!
The PC community always has a hard-on for hating any new feature; it is what it is.
@eclipse4507 I mean, there are a lot of great points. As a programmer, I also find it disheartening to see shitty code run "fast enough" through brute-forcing with faster hardware and workarounds. Compare the bad code of Jedi Survivor vs. the fidelity level of Cyberpunk: one is forced to run fast enough, and the other is pushing the limits of what can be made. Cool progression either way :-) Now they just need to stop saying "AI" for every if statement they write and we are golden!
@@paipai762 Calling everything new under the tech sun "AI" is what bugs me the most
7:01 what game is this?!
Wondering the same
Black State
So when do those DLSS changes come to the Nvidia app?
Tech like DLSS and Reflex is really, really cool, but the thought of relying on proprietary tech like this gets me concerned for already-released games that won't get to feature it, and for games where devs are not willing or able to implement it. Also, of course, what happens if you don't have an Nvidia card, whether by choice or on hardware like a Steam Deck or the consoles? Excellent stuff, it just sounds like a huge headache for devs and product lines, ha
Looks promising.
Reflex 2 sounds so interesting
Just bought my 4070 Ti Super a few days back for Christmas. 5000 Series announcement only a few days later.
Ouch.
don't worry, you get the best features; frame generation past 1 fake frame will have way more artifacts than yours, and you get all the rest of the new stuff too
Buying before Christmas is the real ouch here..
You don't have to buy every new thing that comes out
MUST... CONSOOM... NIVDIA GRPAHIX CARDSSS
A simple Google search like "Nvidia 50 series when" would have told you that everyone was expecting Nvidia to announce Blackwell now...
But still, you aren't going to need MFG, and raster performance is not much improved, from what can be read between the lines of what Nvidia showed. Enjoy your new card and game on 😊
Can I pay with fake money for fake frames?
the face gen stuff kinda makes me sick. techbros really ARE putting AI in the graphics cards, and we all have to pay for it.