So basically you're just using Unreal pre-rendered videos and putting them through video-to-video AI. So what was the AI processing time? A few hours for a few minutes of footage? Not exactly an Unreal shader you've actually developed, is it?
I don't like it. Looks weird. I'd love to see a video on making a physically accurate humanoid model in UE5. By that I mean it has a skeleton and muscles, and it's "animated" the same way the human body animates: by controlling muscles. Is it possible?
Not really possible; you've got to remember that games are highly optimized to be real-time, and if you went that in-depth on one system it would hog the performance of everything else. Trivia: game models already have skeletons, which are placed to have the same pivot points as a real skeleton. Driving the model's animations with muscle movements is possible, but it would be very performance-heavy and janky due to game physics. AI learning is being used to teach characters to move on their own inside virtual worlds "with physics on"; that is the kind of breakthrough we might see in future games. You can also animate muscles very realistically on top of skeletal animation, which is an optimization but would look good (see Unreal Engine muscle simulation).
Eh, I doubt it. Using AI over everything is not a viable solution. Don't get me wrong, it's gonna be useful as hell, but just applying an AI filter won't fix anything.
It's totally a viable solution, even if you need to fill an entire 48U rack with GPUs to make it run in real time. I think it's so viable that I'm literally thinking of selling it as a service to rich gamers. That's just $150K in hardware (96 GPUs). It'll cost $69 per hour. I bet people will pay for it, just for the novelty. Just do remote gaming, coming to a datacenter near you. I bet in 5 years you'll have 1 GPU costing $2 per hour doing the same. GPUs can scale horizontally absurdly well. That's not even counting the possibility of optimizing the models and baking them into silicon.
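Back-of-envelope math on those figures (they are the commenter's assumptions, not verified pricing):

```python
# Back-of-envelope math on the rack-of-GPUs idea above.
# All figures are the commenter's assumptions, not verified pricing.
hardware_cost = 150_000          # 96 GPUs, roughly $1,560 each
price_per_hour = 69              # proposed rental price
breakeven = hardware_cost / price_per_hour
print(f"break-even after {breakeven:.0f} rented hours")        # ~2174 hours
print(f"that's about {breakeven / 24:.0f} days running 24/7")  # ~91 days
```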
Very impressive! A slight contrast tint to match the surrounding lighting and I think it's perfect. Just keep the cohesion stable, and this will be a literal game changer!
@Bluedrake42 guess I'll move on. Have fun man, I loved your vids, but if this is where it's going I'm probably gonna tap out. It's been a great few years man, much love.
Even without being real time, that is really nice. Unreal is getting used for animation projects more and more, and the current Unreal performance capture can make things feel pretty stiff at times. Being able to touch things up with an AI post-process layer might be a great step for a lot of projects, giving them that needed layer of style, appeal, or expressiveness.
I always thought it would be cool if soldiers' combat uniforms could change color with the environment. Those soldiers going down the road with snow as a background stood out big time, like ants on Wonder Bread. Great share. Phenomenal work, sir.
Should we not have automated factories, built computers (that used to be a human job title), created the internet (online banking, travel assistance, shopping), or moved away from telephone switchboards? People used the same argument in all of those cases too. Where innovation makes one job obsolete, it will create new ones.
Dude! You are on to something extremely powerful. I love this in-between system or method you are considering. Weird that I thought of this in the abstract as a passing thought a day or two ago, but I was thinking of classic 80s and 90s games. Man, this 4:18 is so crazy good. Wow
Unreal Engine is rather impressive, but I swear that about a week ago, this was exactly what I was talking about in a comment, regarding next gen graphics. I'm happy to see that someone who actually knows how all of this functions has already started to implement it. I look forward to the future of this post processing technique. Great job!
You can create some truly mind-blowing effects by targeting specific portions or ranges of the RGB spectrum, adding varying degrees of randomness to those levels. This approach can lead to surreal, unexpected results. If you combine knowledge of image processing (think Photoshop filters) with AI that can manipulate entire frames or specific color spaces like RGB, luminance, or even the LAB color space, the creative possibilities are endless. This opens up exciting opportunities for gaming, where the complexity of rendering these effects likely won't strain the GPU or CPU much. I really admire your innovative approach to this; it's incredibly exciting!
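A minimal sketch of that color-space idea, assuming numpy and scikit-image; the exact effect isn't spelled out above, so the luminance-only perturbation here is purely illustrative:

```python
# Illustrative sketch of the LAB-space idea above (assumes numpy + scikit-image).
# Perturbs only the L* (luminance) channel, leaving color (a*, b*) untouched.
import numpy as np
from skimage import color, io

rgb = io.imread("frame.png")[..., :3] / 255.0      # hypothetical input frame
lab = color.rgb2lab(rgb)

noise = np.random.normal(0.0, 2.0, lab.shape[:2])   # small random L* offsets
lab[..., 0] = np.clip(lab[..., 0] + noise, 0, 100)  # L* is defined on [0, 100]

out = np.clip(color.lab2rgb(lab), 0, 1)
io.imsave("frame_out.png", (out * 255).astype(np.uint8))
```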
Nice. Every game ends up being its own AI engine with a smaller hard-drive footprint. This is literally the most amazing tech I have seen using AI in video games. I would love to see every game use this.
At some point, are we going to blur the line between useful and realistic? If it works perfectly in real time, this will be a game changer for multiple studios that love to storytell with realistic characters, like Rockstar, Naughty Dog, and a lot of others that would love to put work in that direction. This is awesome, man. Keep up the great work; I hope you capitalize more on the work you are doing to collect these awesome upcoming systems. I really enjoy your channel.
This is so going to happen. I think once they get it good, it's actually going to reduce the computational load rather than increase it. MetaHuman models are HUGE and detailed characters can be fairly computationally expensive; once it's just image processing, it can actually be pretty cheap. AI has been getting roughly 10x efficiency gains in recent years; if this keeps up for any length of time, it should be able to run in the hundreds of frames per second (10 ms per frame or less). Camera filters are already running at these kinds of frame rates on PHONES.
There is a sci-fi novel series from the 90s called Otherland, by Tad Williams. The plot revolves around a group of people being stuck in a life-like virtual simulation. Toward the end of the series you learn a bit more about the technology running the virtual universe, and conceptually it is eerily similar to this. The actual simulation is run by conventional computers and is not anything super special. But there is a second layer to the technology that is basically a telepathic organic computer that applies dream-like filters over the user's senses to make them perceive the virtual universe as life-like with every sense. They can see, feel, smell, and hear just as they would in real life. Excellent book series, by the way.
This is incredible, and I knew it was just a matter of time. Great work here setting this all up. When we nail down consistency and real time, we are going to be at a new golden age of gaming.
We're doing film pre-vis using UE5; this would be great for solving the MetaHuman plastic look. Right now we run our image sequence through Stable Diffusion to get a similar look, but the temporal consistency isn't as good. Using the post-process could be what we're looking for.
My god! Awesome. Also, you could in theory train AI on character animation and camera movement with controller inputs, to simulate 3D and cut back on actual 3D effects.
I'm most excited for what these kinds of things will do for indie films. It feels like we're already at the point that a two or three man team could use all of Unreal's environment and animation automations to build something that could potentially match a Pixar level animated film.
I do think it would be cool if people had the option in a character creator to just type out what their character looked like or pick from check boxes to help the prompt. This could actually go further in another direction where you can create your own avatar from photos without it looking like a pasted on texture map.
Wait, if the filter works on these models, could you in theory use low-poly/low-spec detail to make less demanding models, so the filter does the heavy lifting while the 3D models direct the actions and the script events? I don't imagine the AI requires a lot of detail; maybe high-res head and hand meshes + textures, with limited lighting and low-res clothing textures, etc. Have the AI do the heavy lifting on light interactions.
Yeah, this is amazing. Also looking forward to real-time lip sync between players. I could see this being used in any open world, or a blockbuster movie like Raiders of the Lost Ark, Stargate, Star Wars, Star Trek, you name it. Sounds super fun!
I wrote an article when the PS3 came out, where Sony demoed some very realistic, next-level face gestures. It was amazing. You made this? Incredibly good.
This is super great and I want to learn how to use it. I've been talking about this for a while now, but instead I thought of incorporating it into a TV or console directly.
These are the most accurate human face models I've seen that respond that well to someone's real-life face. I can definitely see this becoming indistinguishable from reality if YouTube doesn't put a tag over them specifying: A.I. video.
I've been thinking about this sort of thing, though I'm not tech-savvy in the field. I assumed that eventually generative A.I. would let us play through old games, and it would update all the textures and animations. I was thinking of EverQuest (the MMO from 1999): you could give all the NPCs A.I. with personality subsets and information drawn from the tons of lore, so you could interact with them. And the development A.I. would let you speak to it as a developer, so you could be running through zones, interacting with NPCs, and having a conversation with the dev A.I. software to direct/suggest/change/implement things. I feel like it could really bring back a lot of otherwise seriously dead games that used to be good but are very aged, not just graphically; mechanics/animations/interactions could all be implemented into the game just by having a conversation with the A.I. dev-kit tools. You could make a suggestion, or drag and drop a video of a person doing a backflip, a dragon spitting acid, or a picture of some texture from real life, and poof... into the game it goes.
Thanks for watching everyone! If you want access to my project files, consider joining our academy linked in the description! I want to leave this comment here and explain some specifics regarding how this system works.
1. Is it real time?
No! I wish it was, and in the coming years (or months) it likely will be. Multiple real-time AI video generation systems have already been prototyped, but currently they run at low FPS and low resolution. For this demo I used Unreal Engine's "movie render" system to create the footage, and my Python script processed that footage with my post-process technique (a rough sketch of this kind of offline pass is below this comment).
2. Can it run indefinitely?
Yes! The technique I've developed can run indefinitely. I could upload two hours of gameplay if I wanted to (although that would take a while to render) and it would render all of it and maintain at least 90% consistency for characters and environments. However, the system I developed isn't perfect. You'll notice that while it keeps roughly 90% of the character fidelity intact, every 5-8 seconds or so you'll see tiny details "fluctuate", such as articles of clothing, small details in the face, and so on. However, this is still a huge improvement over traditional generative AI models for video/games... and the stability is significantly beyond what you'll see with other models.
3. Can I download this system?
I am not releasing this system yet, especially since it requires quite a lot of compute resources to run. However, if you are interested in learning how to develop a system like this yourself... feel free to check out our online academy at projectascension.ai!
See you all in the next video!
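For readers who want a concrete picture of point 1 above, here is a minimal sketch of that kind of offline pass. It is not the author's actual (unreleased) script; it assumes the open-source diffusers library as a stand-in img2img backend, run frame by frame over the Unreal movie-render output.

```python
# Minimal sketch of an offline AI post-process pass over pre-rendered frames.
# NOT the author's script -- an illustration assuming Stable Diffusion img2img
# via the open-source diffusers library.
import glob, os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("out", exist_ok=True)
# A fixed seed per shot is one simple way to reduce frame-to-frame flicker.
generator = torch.Generator("cuda").manual_seed(42)

for path in sorted(glob.glob("render/frame_*.png")):  # Unreal movie-render output
    frame = Image.open(path).convert("RGB")
    result = pipe(
        prompt="photorealistic soldier, natural skin, film grain",
        image=frame,
        strength=0.35,        # low strength preserves the game's composition
        guidance_scale=6.0,
        generator=generator,
    ).images[0]
    result.save(os.path.join("out", os.path.basename(path)))
```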
I think this could be done with a multi-GPU setup: one GPU runs the game, its video is passed to a second GPU whose sole job is to run the AI model, and the output is passed to the screen. This way you can run a higher frame rate.
I think Two Minute Papers mentioned realtime video style transfer quite a while back.
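A hypothetical sketch of that two-GPU split in PyTorch (the enhancement model here is a placeholder; real frame capture and display hand-off are elided):

```python
# Hypothetical two-GPU pipeline: the game renders on cuda:0, a placeholder
# enhancement network runs on cuda:1, and the result is handed back for display.
import torch

game_gpu = torch.device("cuda:0")
ai_gpu = torch.device("cuda:1")
model = torch.nn.Identity().to(ai_gpu)  # stand-in for a real enhancement model

def present(frame_rgb: torch.Tensor) -> torch.Tensor:
    # frame_rgb: HxWx3 tensor produced by the renderer on the game GPU.
    enhanced = model(frame_rgb.to(ai_gpu, non_blocking=True))
    return enhanced.to("cpu")  # hand off for scan-out / display

frame = torch.rand(1080, 1920, 3, device=game_gpu)  # fake rendered frame
out = present(frame)
```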
We'll probably see this in games very soon, because something like it already runs in real time on smartphones: face filters and similar effects in apps. I get that it's much lower fidelity, but it works on mobile!!!
will gta6 have this?
@@Microphunktv-jb3kj Probably not. Maybe with some mods on PC later on
"People have 7 fingers instead of 6"
bro what
He is secretly a reptilian
#joke
If you stare long enough into the abyss the abyss stares back at you
Plot twist: Bluedrake has been an AI this entire time... 🤖
ragebait
It will have the biggest impact on VR.
I can't wait!
Finally someone said it 😮
No, no, I don't want generative AI in my headset. That sounds like a literal nightmare.
@@flowstategmng Huh?
Is VR still a thing?
I'm still waiting for a game with ultra-realistic damage physics, like when objects get hit by bullets or different objects crash into each other. This is the biggest shortcoming in gaming today.
Teardown has this already, and if you download the structural integrity mod it's even more realistic. Only issue is it's very performance heavy. Also a bit unrelated but if you're interested in physics in software, Blender recently released a cloth fold plugin that uses normal maps to simulate bending folds in clothing
Dead Island 2
Sounds cool on paper, but actual experience would be extremely suffocating
Like, imagine playing some FPS, getting hit by a bullet in the upper leg, and dying in 3 minutes because the bullet ripped your femoral artery.
Armor would also be completely different. Especially helmets, which are basically useless at stopping anything more powerful than .45 or 9mm; the only way you survive a faster round with more energy, like 5.56 or 7.62, is if it ricochets off the surface, and that only happens at certain angles and under certain conditions.
Character Artist here. This looks absolutely incredible!! I could pretty much see a generative model trained on data from playing the game at VFX-quality settings on a bunch of supercomputers. The trained model's data is then shipped with the game, so players could play at "low" settings but have a generative layer on top that makes it look photorealistic without sacrificing performance as a result. Absolutely incredible stuff.
Hmm... You'd still need the weights from the model, which I can't imagine would be small, then you'd need that to run on your GPU, fast enough for realtime, hmmm...
this will eventually lead to just needing to render something like a wireframe with color codes for materials
Maybe, but that will have problems when it comes to realistic graphics. For cartoony and simple styles, not so much.
@@redarke not if you train the pyramid encoder to know what the materials are from special colors
@@monad_tcp That's why I said maybe and didn't deny it. I'm excited to see people improve this kind of tech, but for now it still hasn't really surfaced.
Look at "gta san andreas but its reimagined by ai". It's horrible and will never be used unless its 100% stable, and ai videos are never stable
@@piotrek7633 Yeah, I saw that. It looks cool, but the execution is bad because they are using simple tools not meant for it, like Luma AI and some random AI generation engines.
It's important to remember that none of this is possible in real time yet... YET being the operative word. Ten years from now, I totally agree, this is likely going to be the norm in video games, where the game itself just renders reference data in a format that's most efficient for the AI filtering.
I'd say 3-5 years; the first game for the public will be in 2 years. You should remember that this will require gamers to buy new hardware, so studios will be encouraged to make the jump quickly due to pressure from hardware vendors. Like he said, the first studio in stands to make a whole lot of money from hype and NVIDIA, lol; the game just needs to be decent.
@@Unspairing The industry is too wrapped up in ESG/DEI to want to push boundaries. So definitely not two years, unless an indie studio makes an amazing game like Unrecord that actually forces the rest of the industry to play catch-up. But we are no longer in the age where companies like Crytek made Crysis, which forced hardware companies to evolve and by proxy forced the industry to follow suit.
@@Billy-bc8pk It only takes one game. Plus you're forgetting that CDPR wants customers' faith back, and that Asian game companies are on the rise. We might even end up with an actual culture/economic war with Asia to compete for hearts and minds, like in the Cold War, or the British "Invasion" (music and TV).
@@Unspairing CDPR lost ALL of their Red Engine engineers, and that was the only engine on the market that could run path tracing in an open-world environment with real-time dynamic lighting on modern hardware at 60fps. The UE5 is incapable of doing that without refactoring the backend and overhauling the multithreaded pipeline. CDPR effectively took a HUGE step backward. So no, they will not be doing anything groundbreaking at all, especially since UE5 does not handle open-world games well with a lot of game logic due to a lot of backend bloat. Anyone who believes CDPR will "regain customer faith" is huffing some serious copium; the only way that would work is if they bring back the Red Engine and bring back all of the world-class engineers they lost. But they are too busy doing DEI hiring, so that's out of the question.
You are right that a lot of Asian developers and studios are starting to do some good things with the UE5, so if any of them take the time to refactor it for a project that isn't just a visual API showcase for AI frame-processing and post-processing overlays, and it has actual good gameplay at the foundation, then sure, we might. But most games are barely optimised to run UE5 runtimes at 30 consistent, non-interrupted fps, much less 60 non-interrupted fps. I wouldn't expect to see this kind of tech being utilised for AI photorealism in UE5 until a studio prioritises fps optimisation above all else, and then they apply the post-processing API on top of that.
@@Unspairing two years is a pretty tight deadline for that given development cycles. Probably in some smaller games or simple indie concepts
This is interesting and you should pause your studies and focus 100% on this technology. You should secure the IP first and license it out to bigger studios.
Hey, dude, I didn't know you were actually studying AI and data science in a graduate program.
Years ago you made a video about Godot, and you were among the first to do it, if I remember correctly. For some reason it was that video that made something snap in me, and I decided to go forward with software engineering. I just stopped doubting myself and stopped making excuses.
Anyways, I wanted to say, after all this time (and you've probably seen this kind of comment before): if it wasn't for you, I wouldn't be a software engineer now.
Comments like this make my day. Thank you man.
Chat this is great
This is awesome.
@@Ozzies *wholesome
@jonorgames6596 - I'll accept that correction, lol.
I'm not sure why, but we Aussies haven't really caught onto that word. In saying that -- I suppose we aren't really known for proper pronunciation, word placement (where we place words when speaking), etc. lol 😅
Example:
The world says 'That's really nice' when speaking about a nice..car?
We Aussies will say: 'It's a beauty, ae' or 'f*n oath that's sick as'
My suggestion for game developers is to stop working so much on graphics. Build good gameplay and a good UI, because the textures, shading, lighting, water, and small details will be generated by AI.
Sure, if you want really generic looking graphics
@@paijwa I know, I don't want every game to look realistic. I like playing games like DBD with their own art style.
I would rather have gameplay over graphics
righteously said
I would like both
Gameplay wise we have regressed from what we used to get. I wish we still had the early 2000s devs with today’s technology.
More focus can be spent on gameplay if you don't need to spend as much time on graphics.
*whynotboth.gif*
Wow, this is genuinely impressive. And this from a guy I know mostly for bookin' around yelling and holding down the left mouse button while running at the enemy.
Imagine a Reshade effect that layers AI generation in the post-process. I don't think our hardware is ready for it yet, but you could revive any old game and make it look photorealistic.
That would be...massive. Like unbelievably insane. It's definitely not a direction I imagined the industry would go.
The problem is the inconsistency from one frame to the next. I still think the best use for AI right now is assisting in reducing performance overhead for certain things, like denoising noisy one-step ray tracing.
@@yesyes-om1po This is probably because it's pure post-process. DLSS and real-time RT/PT denoising have direct access to the engine's data (motion vectors, depth buffers) and have done extensive learning to allow for consistency across frames. It would have to be something that can directly access the game's data and inject changes, such as analyzing meshes and increasing their complexity, then caching the result for direct access.
Two Minute Papers has videos on that subject. One being "Intel's Video Game Looks Like Reality!". Worth a watch.
and destroy its charm and make everything generic as fuck, just because some dummies, for reasons they cannot explain, crave "ultra realistic"
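On the frame-to-frame consistency problem raised a few replies up: one common mitigation (not what DLSS does internally) is to warp the previous output toward the current frame with optical flow, then blend. A rough OpenCV sketch, illustrative only:

```python
# Rough sketch of optical-flow temporal smoothing to reduce frame-to-frame
# flicker in a post-processed video (assumes OpenCV + numpy).
import cv2
import numpy as np

def temporally_smooth(prev_out, cur_out, prev_gray, cur_gray, keep=0.6):
    # Flow from current to previous frame, so we can backward-warp prev_out
    # into the current frame's coordinates.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    warped_prev = cv2.remap(prev_out, grid + flow, None, cv2.INTER_LINEAR)
    # Keep most of the new frame, but pull it toward the (warped) old one.
    return cv2.addWeighted(cur_out, keep, warped_prev, 1.0 - keep, 0)
```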
5:20 Imagine, 10 years from now, instead of pirates downloading a cracked game, they just download an AI model that was trained on the game.
Also, imagine how resource-efficient this would be storage-wise. Maybe in the future, game devs won't release their games; they'd instead train an AI on their in-house game and only release the AI model.
If that happens, there will be no gaming industry. What company is going to make a game only for it to be stolen by AI? AI is already killing artist jobs in most industries.
@@LancerX916 The industry would probably switch to some sort of subscription-based model, where the incentive to access their server is that they have a curated seed that produces a high-quality experience.
Each game would have a subscriber side, and an open-source side. The subscriber side would have to offer a very good experience to stay in business however.
Personally I don't like that because I dislike subscription, and already dislike not owning our current games.
Before 10 years are out, games will likely be generated in real time using a multimodal/multimedia model. And due to the convergence of training data, the rise of sludge games, and the fear of advanced fraud and identity theft, everyone will just privately ask the model they use for everything else in their life to generate their own personalized experiences. Game developers will probably find themselves making synthetic data to fine-tune video models and riding the diminishing demand for more training data, before the stage when everyone becomes an absolute hermit, which will likely happen in less than 10 years.
Neural Radiance Fields (NeRFs) already capture your scene within model weights.
Not even that. Just list some games you like along with some gameplay elements and it'll generate a game for you. Want Valheim but with Egyptian aesthetics and a more robust gardening system? What about Battlefield 4 aesthetic but rebalanced for Helldivers-style PVE gameplay? Dark Souls but featuring Akuma or Dante facing the actual Marvel characters he could realistically beat lore-wise?
It's going to need a massive uplift in compute, and I do mean massive. And I don't know how you get there at a fair price. If the tech can only run on $5k workstations, then sales will be small. It feels like we are at the dawn of new computation, but that dawn is going to need massive shake-ups in available compute.
Not to mention how the usage of AI is already helping accelerate the death of our planet, with how much water it uses and how it exhausts power grids. If people use this, it will demand even more strain. I'm not saying the technology isn't impressive, but it's also dangerous.
Haven't you posted this comment somewhere else on RUclips?
Like I don't mean to be ############### but this seems like a déjà vu checkpoint.
There is going to be an insane uplift in CPU power at some point. I watched a few videos a year or two ago.
I can't remember the details exactly, but basically, the jump in power is going to be absolutely out of this world compared to what we have right now.
@@SkemeKOS CPUs at this point, barring changes to AI tech, are a poor man's processor for AI. So the uplift will need to be massive: additional NPU tech, or an uplift in already overpriced GPUs. Getting back to the core issue: without a tech uplift, and getting it to consumer cost, this will be a barrier to usage (and thus sales).
It does nothing if the game is not fun.
It helps to decentralize the creation of games, so that people will add their own creative spin on combat and stories
Bullshit. Technology makes games exciting. Look at real-time dynamic lighting in Doom 3, or physics in HL2. The technology IS the gameplay...
True. If the game is not interesting, then nothing will make it good.
@@MagnitudePerson Tell me you've never played a game released prior to 2008 without telling me you've never played a game released prior to 2008.
Why do you act as if you can't have both?
Games would be a playable movie at that point
The output is crazy amazing. Wow, a new era is here.
*Cyberpunk 2077 would be insane. The big "problem" there is just this, human realism.*
'been working with 3D graphics for decades now... and I'm convinced this will be the next major shift. There were some examples of AI 'upscaling' over the GTA 6 trailer, and it took things to a whole new level... 'kind of wondering if Rockstar will attempt to put this in (drop the resolution + AI upscale + realtime AI filtering, based upon their own training of their own world). LOL, maybe. 'very cool examples of yours [you've gained a new subscriber].
VR is the next step. I haven't played flat screen since getting the Quest 3; love it.
That is really awesome, Man! Very cool.
That AI model running Counter-Strike is very interesting. They should do the same with data from reality. They could eventually make a simulator of the entire planet. They just need data from humanoid robots or people with sensors.
It's that easy, folks.
I'm trying to get that level of realism myself with MetaHuman, but they still look cartoonish. I mean, I added custom scanned skin too, but something is still off. I don't know why skin is so difficult to replicate accurately, and I don't understand what's missing; it's like the skin doesn't properly reflect light as it would IRL. If you place the lighting strategically then it might work, but that implies having the lights at low brightness. Like, what the hell is missing T-T. Maybe increasing the "oily" effect on the skin strategically on parts like the forehead, the nose, and under the eyes might work; not every part of the face reflects the same way.
Also, about the tech in this video: pretty cool, but AI tends to put too much of an "HDR" effect on the characters; the light is too strong.
With skin there is also subsurface scattering happening, not easy to get right.
I wonder if they'll start doing "rebuilt for AI remasters" where they take a new game and strip it down to just basic wireframes/colors to give AI direction on what to render. That's gotta be the way to fit this on consumer hardware.
And even then the user will need affordable GPUs specifically designed for this. Not happening any time soon.
What did you make here? Did you build a custom model, or just use Runway/Luma? This looks almost identical to Runway, so I'm just a little confused about what you did.
I think use cases like DLSS/DLAA, where machine learning is used to apply one specific "layer" of effect, are incredibly interesting. Your demonstration here shows the potential AND the limitation, and anyone who has messed with ML stuff knows how inconsistent it CAN be. Reining it in is a challenge.
The most immediate current use cases I can think of are going to be things like custom trained models that generate different material textures, complex physics simulations, effects that replace the shaders that we use today, post process effects, generative AI that randomizes variations on textures and stuff so there are no repeated textures, runtime texture upscaling...
Full frame replacement is very promising, but it's a big ask right now. It's going to be in playable games way sooner than we would have thought, but it's going to be janky for a long time before we can claim to have it under control.
I wanna see someone utilize the chaotic fever nightmare aspects of generative AI. I love that stuff.
Absolutely. And I don't know much about 3D engine processing yet, but I know AI has become stable in terms of models that can keep consistent styles.
I love the idea of gaming companies using that chaos as a sort of aesthetic, as part of a game where the chaos actually makes sense.
And I don't know if we have a VR MMORPG yet.
Imagine an MMORPG in the style of Cyberpunk and Surrogates that delivers on what The Division should have been, and open-world like WoW 😲 in VR with similar graphics.
That would be amazing.
I do think the post-processing could maybe mitigate the graphics and memory issues, but I need to learn more about post-processing lol.
Amazing topics, and I also side with gameplay having to be fun, for sure.
@joskun Imagine a game where you could move between two worlds: the normal world and the chaos realm. The chaos realm has an inconsistent and chaotic generative AI post-processing effect on it.
Stuff like that would be sick.
This is a fantastic use of generative AI. Great work!
What's amazing about this tech is that it could be implemented in VR and totally render your real self into it, looking how you look in real life, while you are wandering in an AI-created matrix world. Or it could copy the real world, and you could go there for a holiday without actually having to be there for real, interacting with real people who are in VR as themselves. Imagine a Tinder date with someone on the other side of the planet, to see how the date works out before actually meeting up for real.
It'll probably be able to make movies for you: just buy a program, tell it what kind of movie you want to watch, and it generates it for you.
While graphics are not everything, I don't think such technology will be widely used in the future, because of its lack of persistent memory, which is a huge deal when it comes to games. Players make a range of camera movements, and having the AI post-processing generate a "similar" looking image will break the illusion, because the image would be "similar" but not persistently the same: a model, say an apple, with a certain texture would look familiar but not identical in every generated frame. It goes against what video games as an art form are: creative and talented people making a collective vision come to life, where everything you see has been deliberately crafted by the hands of artists and programmers. This inconsistency would get worse far from the player's camera, where a mip-mapped texture would just become a garbled mess in the distance. Titling the video "This technology will change video games forever" is an overstatement. Players need realism in geometry simulation, better AI NPC systems, and in general a fun environment. Post-process AI is not a game changer.
2:16 I'm sorry wut
Squadron 42 with this filter would be so cool.
Great bro, can't wait for a dementia filter over all my graphics. Can't wait for everything in game to become a variable and non-concrete morphing blob with zero consistency. It'll be like trying to read text in a dream.
This needs way more views. This was absolutely incredible.
I don’t know man… it’s cool and all but also very scary.
I've played with ENBs in Skyrim VR and realized how subjective they were. After reading ENB authors' notes about 'tweaking' settings, it became apparent how much trial and error was involved in producing an acceptable result.
It's the perfect job for an AI, and it's just one of many applications.
Great vid.
Even me person who wish to see realistic graphics in games before I'm gone, even I do understand that realistic graphics is a thing for a very limited auditoria. People do care about games. Good games. People do care about art style that was created by people. AI post processing filter means all gamers may experience one game looking if not completely different then slightly different coz AI processing can't be synchronized and predicted. As well as visual artifacts will be here and there. You can see right on Metahuman character face that skin is kinda strange and also hair area skin is white instead of black. All those demos with real games look like a bloody freak show. Nobody needs those realistic bodycam shooters. Those just cool to look at maybe once and that is it. How many people use those GTA5 or Cyberpunk mods that make it more realistic? From what I found it is like from 10k to 60k downloads. Well. Maybe there is more but I don't think so. People are wondering how cool the graphics can be achieved. Yes they are watching those videos with ultra realistic mods but they don't really need it. There are millions of people that continue to play games that looks like PS2 games with better resolution and they are just fine. Blizzard's slaves don't care bout visuals either. Millions are playing anime style games. But only few people need realistic looking game to get away from our realistic reality. Arma3 or Squad players would be happy to run realistic looking something like ArmSquad maybe but wait. How many people are playing Reforger which is like on the way new level compare to ARMA3? 3700 people day peak and 12k all time peak!? Wait what!? Arma3 has 12k 24 hours peak and 57k all time peak? Right! Just few needs that realistic visuals. And why!? Yes! This is the question! Why people don't play way more realistic Arma Reforger!? Because developers are idiots that can't provide gamers with a proper gameplay to their games so there are few people that are ok to entertain themselves. People pay money not for the graphics but for the game! And the game is meant to entertain us!
So nah! No AI realism filter is going to make millions for anybody. Perhaps some AAAAA studios will try to add it as a feature in the settings menu, or maybe they'll try to sell it separately to see if gamers want it, but I'm 100 percent sure this is just a money-wasting, stillborn idea.
This is really insane! Congrats!
Congratulations. You've created a way for an indie to get AAAA-studio-level graphics. Applying it in post-processing means not needing as much detail in the models and textures.
If you can apply this to your game, Operation Harsh Doorstop, you'd prove it can be done in a multiplayer game.
A video game concept: "Relationship Dynamics", a game about people meeting other people and developing friendships and romances.
OK but they already made Life by You, then canceled it instead of releasing it
I mean, with anything AI it feels like we won't have it until there's consistency in the images it creates, and I don't know what it takes to get that. Even then, I believe the future of games is voxels. I love voxels. I don't think AI could avoid limiting what you can do in a game while still giving consistent results. On the other hand, if we go the voxel route, we can have destructible environments, we can have physics, and we can have fully material-based rendering instead of objects with properties and textures attached to them.
THIS is exactly what I have been hoping to see these tools and advancements used for!!!! Awesome work, dude! Could you use it for artistic aesthetics as well, instead of only realism? Can you improve physics effects in games?
wouldn't surprise me if the tech required to render this in realtime in 10 years is like $100000 per RTX 8090.
and WHEN this is possible, people might not even want it anymore :/
You promise gold, but we need to pay to get access to something we don't even know is a scam or not.
Seems like this would introduce a lot of lag into the game.
Yep.
All post-processing introduces lag, even ReShade. But if it's baked into the engine, it can be mitigated if not completely eliminated.
@@RedactedBrainwaves2 FACTS!!!! but hardware is getting better and better
Gta 6 AI realism mods are gonna be wild in a few years
This is insane, GTA 5 would be wild like this lmao
We may be applying these filters to any game in the near future...
I had a similar idea using an engine with an easily available depth map that a processor could use. You'd basically use segmentation plus a color-ID pass for the models, so the AI could use a model's vertex color to look up details about that model by linking it to a database: the color ID is essentially the name of the thing, and the database contains the prompts for that item, for greater accuracy.
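As a rough illustration of that lookup idea, a minimal Python sketch might look like this; the ID palette and the prompt "database" below are hypothetical placeholders, and a real pipeline would export the ID pass from the engine alongside the beauty and depth passes:

```python
# Minimal sketch of the colorID -> prompt lookup described above.
# The palette and prompt table are made-up examples, not a real spec.
from PIL import Image

# Hypothetical "database": each flat ID color painted onto a model
# maps to a descriptive prompt fragment for the generative model.
PROMPT_TABLE = {
    (255, 0, 0): "weathered leather jacket, visible stitching",
    (0, 255, 0): "matte steel rifle, scratched finish",
    (0, 0, 255): "middle-aged male face, photorealistic skin pores",
}

def prompts_for_frame(id_map_path: str) -> set[str]:
    """Collect the prompt fragments for every ID color present in a frame."""
    id_map = Image.open(id_map_path).convert("RGB")
    colors = {rgb for _, rgb in (id_map.getcolors(maxcolors=1 << 20) or [])}
    return {PROMPT_TABLE[c] for c in colors if c in PROMPT_TABLE}

# Example: build one combined prompt for this frame's img2img pass.
print(", ".join(sorted(prompts_for_frame("frame_0001_id.png"))))
```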
This is a really cool way you've set it up to run through ReShade; I'm a ReShade user as well! I love it, dude
Intel did this a few years ago: they took GTA footage and made it photoreal with the same img2img technology, and yes, it's going to become feasible in the coming years.
Three years ago, to be precise: "Enhancing Photorealism Enhancement" by Intel ISL and collaborators.
That is absolutely insane, dude! Imagine that in a VR game, it would blow people's minds.
This is impressive, but we're at the point where graphics have reached a threshold where they don't hold as much importance compared to really creative and unique gameplay mechanics, better-feeling AI interactions and world interactions, and a gameplay loop that is fulfilling and satisfying. I want AI to enhance those aspects rather than just improving graphics and visuals, which already feel quite good and at points very samey.
I loved that video!!! Could you make more videos about these new technologies? I can't help watching a lot of these videos on YouTube 😊
So basically you're just using Unreal pre-rendered videos and putting them through video-to-video AI. So what was the AI processing time? A few hours for a few minutes of footage? Not exactly an Unreal shader you've actually developed, is it?
I was thinking the exact same thing, it’s clearly Runway ML
I think you both missed the point. "This technology will change games forever" is not the same as "insane way to get better graphics in UE5"
you're not the first one I've seen doing this, but I'm glad the technique is being used more commonly these days
I don't like it. Looks weird.
I'd love to see a video on making a physically accurate humanoid model in UE5. By that I mean it has a skeleton and muscles, and it's "animated" the same way the human body "animates": by controlling muscles. Is it possible?
Not really possible. You've got to remember games are highly optimized to run in real time; if you went that far in depth on one system, it would hog the performance of everything else.
Trivia:
Game models already have skeletons, which are placed to have the same pivot points as a real skeleton.
Driving a model's animations with muscle movements is possible, but would be very performance-heavy and janky due to game physics.
AI learning is being used to teach characters to move on their own inside virtual worlds with physics on; that's the kind of breakthrough we might see in future games.
You can animate muscles very realistically on top of skeletal animation, which is an optimization that still looks good (see Unreal Engine's muscle simulation for an example).
What an insane video. Thank you for sharing, and demonstrating.
Eh, I doubt it. Using AI for everything is not a viable solution. Don't get me wrong, it's gonna be useful as hell, but just applying an AI filter won't fix anything.
Remember when people said DLSS was a fad? Ahh... Good times.
With DLSS 3.7, over 70% of total pixels are already AI-generated.
It's totally a viable solution. Even if you need to fill an entire 48U rack with GPUs to make it run in real time, I think it's so viable that I'm literally thinking of selling that as a service to rich gamers.
That's just $150K in hardware (96 GPUs). It'll cost $69 per hour. I bet people will pay for it, just for the novelty. Just do remote gaming, coming to a datacenter near you.
I bet in 5 years you are going to have 1 GPU costing $2 per hour doing the same.
GPUs scale horizontally absurdly well.
That's not even counting the possibility of optimizing the model and baking it into silicon.
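For what it's worth, a quick back-of-the-envelope check of the numbers in this thread (all of them the commenters' own assumptions, not measured figures):

```python
# Sanity-check the rack-rental math from the comments above.
rack_cost = 150_000      # dollars, "$150K in hardware (96 GPUs)"
gpus = 96
rate = 69                # dollars per hour charged to the player

print(rack_cost / gpus)        # ~1562.50 dollars per GPU
print(rack_cost / rate)        # ~2174 rented hours to recoup hardware alone
print(rack_cost / rate / 24)   # ~90.6 days of 24/7 rental to break even
```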
Very impressive! A slight contrast tint to match the surrounding lighting and I think it's perfect. Just keep the cohesion stable, and this will be a literal game changer!
I won't ever play any game that involves generative AI processing; it's the destroyer of innovation.
Well you’re not gonna have many games to play here soon then.
@Bluedrake42 Guess I'll move on. Have fun, man, I loved your vids, but if this is where it's going I'm probably gonna tap out. It's been a great few years, man, much love.
@@RyleyStorm I think I'm with you on that one.
Amazing! Cannot wait... all the characters in Call of Duty will be like this!
Oh, you're starting a whole new shitstorm now 😂
Even without running in real time, that is really nice. Unreal is getting used for animation projects more and more, and the current Unreal performance capture can make things feel pretty stiff at times. Being able to touch things up with an AI post-process layer might be a great step for a lot of projects, giving them that needed layer of style, appeal, or expressiveness.
I bet you could do a generative AI layer… and then RE-motion capture off the AI output lol
ugh
I always thought it would be cool if soldiers' combat uniforms could change color with the environment. Those soldiers going down the road with snow in the background stood out big time, like ants on Wonder Bread. Great share. Phenomenal work, sir.
The amount of jobs that are going to be lost with this is insane, what an awful future. 😥
Should we not have automated factories, built computers (that used to be a human job title), created the internet (online banking, travel assistance, shopping), or moved away from telephone switchboards? People used the same argument in all of those cases too. Where innovation makes one job obsolete, it creates new ones.
Good
I don’t think so. I think there will be more independent developers now.
THANK YOU THAT IS AMAZING!!!!!!!!!!!!
Had to Subscribe, pretty much was Mandatory by the End.
Dude! You are on to something extremely powerful. I love this in-between system or method you're considering. Weird that I thought of this in the abstract as a passing thought a day or two ago, but I was thinking of classic '80s and '90s games. Man, this 4:18 is so crazy good. Wow
Unreal Engine is rather impressive, but I swear that about a week ago, this was exactly what I was talking about in a comment, regarding next gen graphics. I'm happy to see that someone who actually knows how all of this functions has already started to implement it. I look forward to the future of this post processing technique. Great job!
You can create some truly mind-blowing effects by targeting specific portions or ranges of the RGB spectrum and adding varying degrees of randomness to those levels. This approach can lead to surreal, unexpected results. If you combine knowledge of image processing (think Photoshop filters) with AI that can manipulate entire frames or specific color spaces like RGB, luminance, or even LAB, the creative possibilities are endless. This opens up exciting opportunities for gaming, where the complexity of rendering these effects likely won't strain the GPU or CPU much. I really admire your innovative approach to this; it's incredibly exciting!
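For anyone who wants to play with that idea, here is a toy sketch in Python with OpenCV; the color range and noise amplitude are arbitrary choices for illustration, not anything from the video:

```python
# Toy example: pick a rough color range, then jitter luminance only inside
# that range by working in LAB space (so hue is left untouched).
import cv2
import numpy as np

img = cv2.imread("frame.png")                      # BGR, uint8

# Target a rough "red" band (note: OpenCV stores channels as BGR).
mask = cv2.inRange(img, (0, 0, 150), (80, 80, 255))

# Convert to LAB so we can perturb the L (luminance) channel in isolation.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.int16)
noise = np.random.randint(-12, 13, size=lab.shape[:2], dtype=np.int16)
lab[..., 0] = np.clip(lab[..., 0] + noise * (mask > 0), 0, 255)

out = cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
cv2.imwrite("frame_jittered.png", out)
```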
The idea of AI picture processing per frame came to me when I first read about DLSS. Glad to see this happening!
Did you use a particular model that is open to the public? If so, which one did you use in your example?
I could wait 3 more years for GTA 6 if they applied this post-processing in their engine.
This is going to be Insane 🤯 can't wait for it. It is a dream come true.
Amazing. I love this and can't wait for all the amazing AI fun that's coming 😊😊
Nice. Every game ends up being its own AI engine with a smaller hard-drive footprint. This is literally the most amazing tech I've seen using AI in video games. I would love to see every game use this.
At some point, are we going to blur the line between useful and realistic? If this works perfectly in real time, it will be a game changer for the many studios that love to tell stories with realistic characters, like Rockstar, Naughty Dog, and a lot of others that would love to put work in that direction. This is awesome, man. Keep up the great work; I hope you capitalize more on the work you're doing to collect these awesome upcoming systems. I really enjoy your channel.
This is so going to happen. I think once they get it good, it's actually going to reduce the computational load rather than increase it. MetaHuman models are HUGE, and detailed characters can be fairly computationally expensive; once it's just image processing, it can actually be pretty cheap. AI has been getting roughly 10x efficiency gains in recent years; if this keeps up for any length of time, it should be able to run at hundreds of frames per second (10 ms per frame or less). Camera filters already run at these kinds of frame rates on PHONES.
There is a sci-fi novel series from the '90s called Otherland, by Tad Williams. The plot revolves around a group of people stuck in a lifelike virtual simulation. Toward the end of the series, you learn a bit more about the technology running the virtual universe, and conceptually it is eerily similar to this. The actual simulation is run by conventional computers and isn't anything super special, but there is a second layer to the technology: basically a telepathic organic computer that applies dreamlike filters over the user's senses, making them perceive the virtual universe as lifelike through every sense. They can see, feel, smell, and hear just as they would in real life. Excellent book series, by the way.
This is incredible, and I knew it was just a matter of time. Great work here setting this all up. When we nail down consistency and real time, we are going to be at a new golden age of gaming.
Unreal 1 with this system would be mind blowing, but we haven't seen how well the system actually manages 3d environments, just people
If you have a beautiful game, with realistic graphics and all the bells and whistles then congratulations, you have a tech demo.
No matter how advanced graphics get I can still tell the difference.
We're doing film pre-vis in UE5, and this would be great for solving the MetaHuman plastic look. Right now we run our image sequences through Stable Diffusion to get a similar look, but the temporal consistency isn't as good. Using this post-process could be what we're looking for.
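For reference, a batch pass like the one described can be sketched with the Hugging Face diffusers library; the model name, strength, resolution, and directory names below are illustrative assumptions, not the commenter's actual pipeline:

```python
# Minimal img2img batch loop over a rendered frame sequence.
import torch
from pathlib import Path
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "photorealistic human face, cinematic lighting, film grain"
Path("previs_out").mkdir(exist_ok=True)

for frame in sorted(Path("previs_frames").glob("*.png")):
    init = Image.open(frame).convert("RGB").resize((768, 512))
    # Re-seeding identically for every frame reduces (but does not solve)
    # frame-to-frame flicker, since each frame starts from the same noise.
    generator = torch.Generator("cuda").manual_seed(42)
    out = pipe(
        prompt=prompt,
        image=init,
        strength=0.35,       # low strength preserves the UE5 render's structure
        guidance_scale=7.0,
        generator=generator,
    ).images[0]
    out.save(Path("previs_out") / frame.name)
```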
This is just silly. I never thought this would be possible, and now it blows my mind. It's incredible!!! ❤❤❤❤
My god! Awesome. Also, you could in theory train AI on character animation and camera together with controller inputs to simulate 3D and cut back on actual 3D effects.
Did the filter just add lighting where there wasn't any, or did you add it to the pipeline?
I'm most excited for what these kinds of things will do for indie films. It feels like we're already at the point that a two or three man team could use all of Unreal's environment and animation automations to build something that could potentially match a Pixar level animated film.
As always, a fascinating look into gaming. I hope CIG gets in touch! o7
I do think it would be cool if people had the option in a character creator to just type out what their character looked like or pick from check boxes to help the prompt. This could actually go further in another direction where you can create your own avatar from photos without it looking like a pasted on texture map.
Wait, if the filter works on these models, could you in theory use low-poly/low-spec models that are less demanding, so the filter does the heavy lifting while the 3D models direct the actions and the scripted events? I don't imagine the AI requires a lot of detail; maybe high-res head and hand meshes plus textures, with limited lighting and low-res clothing textures, and have the AI do the heavy lifting on light interactions.
Yeah, this is amazing. Also looking forward to real-time lip sync between players. I could see this being used in any open world, or a blockbuster movie like Raiders of the Lost Ark, Stargate, Star Wars, Star Trek, you name it. Sounds super fun!
I wrote an article when the PS3 came out, where Sony demoed some very realistic, next-level face gestures. It was amazing. You made this? Incredibly good.
Imagine reliving OLD video games using this tech... Like Spyro! Or Crash Bandicoot
How do you keep the frames consistent? Is this a fine-tuned SD model or something else?
Impressive. Also, rather frightening.
This is super great and I want to learn how to use it. I've been talking about this for a while now, but I was instead thinking of incorporating it into a TV or console directly.
These are the most accurate human face models I've seen that respond this well to someone's real-life face.
I can definitely see this becoming indistinguishable from reality if YouTube doesn't put a tag over these videos specifying "A.I. video".
The best part of this is that it could be applied to ALL games in existence.
I've been thinking about this sort of thing, I guess, though I'm not tech-savvy in the field. I assumed that eventually generative A.I. would let us play through old games and it would update all the textures and animations. I was thinking of EverQuest (the MMO from 1999): you could give all the NPCs A.I. personality subsets and information drawn from the tons of lore, so you could interact with them,
but the development A.I. would also let you speak to it as a developer, so you could be running through zones, interacting with NPCs, and having a conversation with the dev A.I. software to direct/suggest/change/implement things, etc.
I feel like it could really bring back a lot of otherwise seriously dead games that used to be good but have aged badly, and not just graphically; mechanics/animations/interactions could all be implemented into the game just by having a conversation with the A.I. dev-kit tools. You could make a suggestion, or drag and drop a video of a person doing a backflip or a dragon spitting acid, or a picture of some texture from real life, and poof... into the game it goes.