This technology will change video games forever...

  • Published: 27 Nov 2024

Comments • 874

  • @Bluedrake42 • 1 month ago +75

    Thanks for watching everyone! If you want access to my project files, consider joining our academy linked in the description! I want to leave this comment here and explain some specifics regarding how this system works.
    1. Is it real time?
    No! I wish it was, and in the coming years (or months) it likely will be. Multiple real-time AI video generation systems have already been prototyped, but currently they run at low FPS and low resolution. For this demo I used Unreal Engine's "movie render" system to create the footage, and my Python script then re-rendered that footage using my post-process technique.
    2. Can it run indefinitely?
    Yes! The technique I've developed can run indefinitely. I could upload two hours of gameplay if I wanted to (although that would take a while to render) and it would render all of it, maintaining at least 90% consistency for characters and environments. However, the system I developed isn't perfect. You'll notice that while it keeps roughly 90% of the character fidelity intact, every 5-8 seconds or so tiny details "fluctuate", such as articles of clothing or small details in the face. Still, this is a huge improvement over traditional generative AI models for video and games... and the stability is significantly beyond what you'll see with other models.
    3. Can I download this system?
    I am not releasing this system yet, especially since it requires quite a lot of computing resources to run. However, if you are interested in learning how to develop a system like this yourself... feel free to check out our online academy at projectascension.ai!
    See you all in the next video!

    • @jollygoodfellow3957 • 29 days ago

      I think this could be done in a multi-GPU setup. One GPU runs the game and passes its video to a second GPU, whose job is solely to run the AI model; the output is passed to the screen. This way you could run at a higher frame rate.

    • @Leto2ndAtreides • 29 days ago

      I think Two Minute Papers mentioned real-time video style transfer quite a while back.

    • @NextgenPL • 29 days ago

      We'll probably see this in games very soon, because something like it already runs in real time on smartphones: face filters and similar effects in apps. I get that those are much lower fidelity, but they work on mobile!

    • @Microphunktv-jb3kj • 28 days ago

      Will GTA 6 have this?

    • @NextgenPL • 28 days ago

      @@Microphunktv-jb3kj Probably not. Maybe with some mods on PC later on.
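
To make the offline workflow from the pinned comment concrete, here is a rough sketch in Python: the footage is rendered first (e.g. with Unreal's movie-render system, as image sequences), then every frame is re-rendered by an AI pass. This is illustrative only, not the author's actual script; `stylize_frame` is a hypothetical stand-in for the model call (something like an img2img pass at low denoising strength), and the fixed seed is just one common trick for damping frame-to-frame fluctuation.

```python
from pathlib import Path

def stylize_frame(frame: bytes, seed: int) -> bytes:
    """Placeholder for the AI pass (e.g. an img2img model call).
    Returns the frame unchanged here; a real pass would return the
    restyled frame. Reusing one seed per shot helps keep small
    details from fluctuating between frames."""
    return frame

def process_footage(in_dir: Path, out_dir: Path, seed: int = 42) -> int:
    """Re-render every frame in in_dir and write the results to out_dir,
    preserving filenames so the frames can be reassembled into video."""
    out_dir.mkdir(parents=True, exist_ok=True)
    frames = sorted(in_dir.glob("*.png"))  # frame_0000.png, frame_0001.png, ...
    for f in frames:
        (out_dir / f.name).write_bytes(stylize_frame(f.read_bytes(), seed))
    return len(frames)
```

A real pipeline would also need temporal-consistency machinery beyond a fixed seed, which is presumably where the author's "90% consistency" technique does its work.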

  • @VladimirChibuckov • 1 month ago +455

    "People have 7 fingers instead of 6"
    bro what

    • @karaczan5839 • 1 month ago

      He is secretly a reptilian

    • @Bluedrake42 • 1 month ago +103

      #joke

    • @TechnoMinarchist • 1 month ago +56

      If you stare long enough into the abyss the abyss stares back at you

    • @EagleTopGaming • 1 month ago +32

      Plot twist: Bluedrake has been an AI this entire time... 🤖

    • @blobooger • 1 month ago +6

      ragebait

  • @manny2ndamendment246 • 1 month ago +408

    I'm still waiting for a game with ultra-realistic damage physics, like when objects get hit by bullets or crash into each other. This is the biggest shortcoming in gaming today.

    • @irecordwithaphone1856 • 1 month ago

      Teardown already has this, and if you download the structural integrity mod it's even more realistic. The only issue is that it's very performance-heavy. A bit unrelated, but if you're interested in physics in software: Blender recently released a cloth-fold plugin that uses normal maps to simulate bending folds in clothing.

    • @unafraidninjagaming7933 • 1 month ago +16

      Dead Island 2

    • @missing_RF • 1 month ago +26

      Sounds cool on paper, but the actual experience would be extremely suffocating.

    • @missing_RF • 1 month ago +36

      Like imagine playing some FPS, get hit by a bullet in the upper leg, and die in 3 minutes, because the bullet ripped your femoral artery

    • @missing_RF • 1 month ago +12

      Armor would also be completely different. Especially helmets, which are basically useless at stopping anything more powerful than .45 or 9mm; the only way you survive a faster round with more energy, like 5.56 or 7.62, is if it ricochets off the surface, and that only happens at certain angles and under certain conditions.

  • @discorabbit • 1 month ago +89

    It will have the biggest impact on VR.

    • @Ozzies • 25 days ago +5

      I can't wait!

    • @ZackLee • 21 days ago +8

      Finally someone said it 😮

    • @flowstategmng • 17 days ago

      No, no, I don't want generative AI in my headset. That sounds like a literal nightmare.

  • @Nobody-Nowhere • 1 month ago +166

    this will eventually lead to just needing to render something like a wireframe with color codes for materials

    • @redarke • 1 month ago +3

      Maybe, but that will have problems when it comes to realistic graphics. Cartoony and simple styles, not so much.

    • @monad_tcp • 1 month ago +8

      @@redarke not if you train the pyramid encoder to know what the materials are from special colors

    • @redarke • 1 month ago +1

      @@monad_tcp That's why I said maybe and didn't deny it. I'm excited to see people improve tech like this, but for now it still hasn't really surfaced.

    • @piotrek7633 • 1 month ago

      Look at "gta san andreas but its reimagined by ai". It's horrible and will never be used unless it's 100% stable, and AI videos are never stable.

    • @redarke • 1 month ago +4

      @@piotrek7633 Yeah, I saw that. It looks cool, but the execution is bad because they're using simple tools that weren't meant for it, like Luma AI and some random AI generation engines.
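
The "wireframe with color codes for materials" idea in this thread can be sketched as a flat ID pass: each material renders as a unique solid color, and the generative model (or a lookup on its conditioning side) maps those colors back to materials. The palette and function names below are invented for illustration, not taken from any real pipeline.

```python
# Hypothetical material palette: each material gets one unique flat color
# in the engine's ID pass.
MATERIAL_PALETTE = {
    "skin":     (255, 0, 0),
    "metal":    (0, 255, 0),
    "foliage":  (0, 0, 255),
    "concrete": (255, 255, 0),
}

# Inverse mapping, as a conditioning model would effectively need to learn.
COLOR_TO_MATERIAL = {rgb: name for name, rgb in MATERIAL_PALETTE.items()}

def decode_id_pass(pixels):
    """Map each pixel of the flat ID pass back to a material name.
    Unknown colors decode to None."""
    return [COLOR_TO_MATERIAL.get(px) for px in pixels]
```

In practice the model wouldn't run a literal dictionary lookup; it would be trained so that each special color reliably produces the corresponding material's appearance, which is the point @monad_tcp raises above.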

  • @cptairwolf • 1 month ago +101

    It's important to remember that none of this is possible in real time yet... YET being the operative word. Ten years from now, I totally agree, this is likely going to be the norm in video games, where the game itself just renders reference data in a format that's most efficient for the AI filtering.

    • @Unspairing • 1 month ago +13

      I'd say 3-5 years; the first public game will be in 2 years. Remember that this will require gamers to buy new hardware, so studios will be encouraged to make the jump quickly due to pressure from hardware vendors. Like he said, the first studio in stands to make a whole lot of money from the hype (and so does NVIDIA lol); the game just needs to be decent.

    • @Billy-bc8pk • 1 month ago +11

      @@Unspairing The industry is too wrapped up in ESG/DEI to want to push boundaries. So definitely not two years, unless an indie studio makes an amazing game like Unrecord that actually forces the rest of the industry to play catch-up. But we are no longer in the age where companies like Crytek made Crysis, which forced hardware companies to evolve and by proxy forced the industry to follow suit.

    • @Unspairing • 1 month ago +1

      @@Billy-bc8pk It only takes one game. Plus you're forgetting that CDPR wants customers' faith back and that Asian game companies are on the rise. We might even end up with an actual culture/economic war with Asia, competing for hearts and minds, like in the Cold War or the British "Invasion" (music and TV).

    • @Billy-bc8pk • 1 month ago +2

      @@Unspairing CDPR lost ALL of their Red Engine engineers, and that was the only engine on the market that could run path tracing in an open-world environment with real-time dynamic lighting on modern hardware at 60fps. The UE5 is incapable of doing that without refactoring the backend and overhauling the multithreaded pipeline. CDPR effectively took a HUGE step backward. So no, they will not be doing anything groundbreaking at all, especially since UE5 does not handle open-world games well with a lot of game logic due to a lot of backend bloat. Anyone who believes CDPR will "regain customer faith" is huffing some serious copium; the only way that would work is if they bring back the Red Engine and bring back all of the world-class engineers they lost. But they are too busy doing DEI hiring, so that's out of the question.
      You are right that a lot of Asian developers and studios are starting to do some good things with the UE5, so if any of them take the time to refactor it for a project that isn't just a visual API showcase for AI frame-processing and post-processing overlays, and it has actual good gameplay at the foundation, then sure, we might. But most games are barely optimised to run UE5 runtimes at 30 consistent, non-interrupted fps, much less 60 non-interrupted fps. I wouldn't expect to see this kind of tech being utilised for AI photorealism in UE5 until a studio prioritises fps optimisation above all else, and then they apply the post-processing API on top of that.

    • @ChristopherCopeland • 29 days ago +1

      @@Unspairing Two years is a pretty tight deadline for that, given development cycles. Probably in some smaller games or simple indie concepts.

  • @Hamstimusprime • 19 days ago +13

    Character artist here. This looks absolutely incredible!! I could pretty much see a generative model trained on data from playing the game at VFX-quality settings on a bunch of supercomputers. The data from the trained model is then shipped with the game, so players could play at "low" settings but have a generative layer on top that makes it look photorealistic without sacrificing performance. Absolutely incredible stuff.

    • @yesyes-om1po • 13 days ago

      Hmm... You'd still need the weights from the model, which I can't imagine would be small, and then you'd need it to run on your GPU fast enough for real time, hmmm...

  • @paradoxic1888 • 1 month ago +30

    Imagine a ReShade effect that layers AI generation into the post-process. I don't think our hardware is ready for it yet, but you could revive any old game and make it look photorealistic.

    • @RedactedBrainwaves2 • 27 days ago +4

      That would be...massive. Like unbelievably insane. It's definitely not a direction I imagined the industry would go.

    • @yesyes-om1po • 13 days ago +1

      The problem is the inconsistency from one frame to the next. I still think the best use for AI right now is reducing performance overhead for certain things, like denoising noisy one-step ray tracing.

    • @paradoxic1888 • 13 days ago +1

      @@yesyes-om1po This is probably because it is post-process. DLSS/RT/PT have direct access to a game's code and have done extensive learning to allow for consistency across frames. It would have to be something that can directly access the game code and inject changes, such as analyzing meshes and increasing their complexity, then caching the result for direct access.

    • @jodroboxes • 2 days ago

      Two Minute Papers has videos on that subject. One being "Intel's Video Game Looks Like Reality!". Worth a watch.
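
The mesh-enhancement caching idea in this thread can be sketched very simply: run the expensive AI pass once per unique mesh, key the result by a content hash, and reuse it on later frames. `enhance_mesh` below is a trivial placeholder standing in for a real model call; everything here is illustrative.

```python
import hashlib

_cache = {}

def enhance_mesh(vertices):
    """Placeholder for an expensive AI 'complexity increase' pass.
    Here it just scales each vertex to keep the sketch runnable."""
    return [(x * 2, y * 2, z * 2) for x, y, z in vertices]

def enhanced(vertices):
    """Return the enhanced mesh, computing it only on the first request
    for a given mesh (keyed by a content hash)."""
    key = hashlib.sha256(repr(vertices).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = enhance_mesh(vertices)
    return _cache[key]
```

The cache is what would make the approach viable at runtime: the model cost is paid once per asset rather than once per frame.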

  • @DrSlick • 1 month ago +138

    I would rather have gameplay over graphics

    • @pffferphishe4731 • 1 month ago +8

      righteously said

    • @XPLOSIVization • 1 month ago +36

      I would like both

    • @Isaak-eo6tr • 1 month ago +20

      Gameplay-wise, we have regressed from what we used to get. I wish we still had the early 2000s devs with today's technology.

    • @itzshft • 1 month ago +10

      More focus can be spent on gameplay if you don't need to spend as much time on graphics.

    • @Bluedrake42 • 1 month ago +52

      *whynotboth.gif*

  • @alexandrep4913 • 1 month ago +41

    Hey, dude, I didn't know you were actually studying AI and data science in a graduate program.
    Years ago you made a video about Godot, and you were among the first to do it, if I remember correctly. For some reason it was that video that made something snap in me, and I decided to go forward with software engineering. I just stopped doubting myself and stopped making excuses.
    Anyway, I wanted to say, after all this time (and you've probably seen comments like this before): if it wasn't for you, I wouldn't be a software engineer now.

    • @Bluedrake42 • 1 month ago +19

      Comments like this make my day. Thank you man.

    • @Wilfrat • 1 month ago +3

      Chat this is great

    • @Ozzies • 25 days ago

      This is awesome.

    • @jonorgames6596 • 17 days ago +1

      @@Ozzies *wholesome

    • @Ozzies • 8 days ago

      @jonorgames6596 - I'll accept that correction, lol.
      I'm not sure why, but we Aussies haven't really caught onto that word. Then again, I suppose we aren't really known for proper pronunciation or word placement (where we put words when speaking), etc. lol 😅
      Example:
      The world says 'That's really nice' when talking about a nice... car?
      We Aussies will say: 'It's a beauty, ae' or 'f*n oath that's sick as'

  • @The_Unexplainer • 29 days ago +20

    My suggestion for game developers is to stop putting so much work into graphics. Build good gameplay and a good UI, because the textures, shading, lighting, water, and small details will be generated by AI.

    • @paijwa • 12 days ago

      Sure, if you want really generic looking graphics

  • @TechnoMinarchist • 1 month ago +44

    5:20 Imagine, 10 years from now, instead of pirates downloading a cracked game, they just download an AI model that was trained on the game.
    Also, imagine how resource-efficient this would be storage-wise. Maybe in the future, game devs won't release their games at all. They'd instead train an AI on their in-house game and only release the AI model.

    • @LancerX916 • 1 month ago +6

      If that happens, there will be no gaming industry. What company is going to make a game only for it to be stolen by AI? AI is already killing artist jobs in most industries.

    • @TechnoMinarchist • 1 month ago +6

      @@LancerX916 The industry would probably switch to some sort of subscription-based model, where the incentive to access their server is a curated seed that produces a high-quality experience.
      Each game would have a subscriber side and an open-source side. The subscriber side would have to offer a very good experience to stay in business, however.
      Personally I don't like that, because I dislike subscriptions, and I already dislike not owning our current games.

    • @GiuseppeAP • 1 month ago +5

      Within 10 years, games will likely be generated in real time by a multimodal model. And due to the convergence of training data, the rise of sludge games, and the fear of advanced fraud and identity theft, everyone will just privately ask the model they already use for everything else in their life to generate personalized experiences. Game developers will probably find themselves making synthetic data to fine-tune video models, riding the diminishing demand for more training data, before the stage when everyone becomes an absolute hermit, which will likely happen in less than 10 years.

    • @JanBadertscher • 1 month ago +3

      Neural radiance fields (NeRFs) already capture your scene within model weights.

    • @Low_commotion • 1 month ago +2

      Not even that. Just list some games you like along with some gameplay elements and it'll generate a game for you. Want Valheim but with Egyptian aesthetics and a more robust gardening system? What about Battlefield 4 aesthetic but rebalanced for Helldivers-style PVE gameplay? Dark Souls but featuring Akuma or Dante facing the actual Marvel characters he could realistically beat lore-wise?

  • @panacious • 1 month ago +17

    Wow, this is genuinely impressive. And this from a guy I know mostly for bookin' around, yelling and holding down the left mouse button while running at the enemy.

  • @jimj2683 • 1 month ago +8

    That AI model running Counter-Strike is very interesting. They should do the same with data from reality. They could eventually make a simulator of the entire planet. They'd just need data from humanoid robots or people wearing sensors.

    • @Conbini. • 22 days ago

      It's that easy, folks.

  • @AdmV0rl0n • 1 month ago +22

    It's going to need a massive uplift in compute, and I do mean massive. And I don't know how you get there at a fair price. If the tech can only run on $5k workstations, then sales will be small. It feels like we are at the dawn of new computation, but that dawn is going to need massive shake-ups in available compute.

    • @irecordwithaphone1856 • 1 month ago

      Not to mention how the usage of AI is already helping accelerate the death of our planet with how much water it uses and how it exhausts power grids. If people use this it will add even more strain. I'm not saying the technology isn't impressive, but it's also dangerous.

    • @samuel2745 • 1 month ago

      Haven't you posted this comment somewhere else on YouTube?

    • @samuel2745 • 1 month ago

      Like I don't mean to be ############### but this seems like a déjà vu checkpoint.

    • @SkemeKOS • 1 month ago +1

      There is going to be an insane uplift in CPU power at some point. I watched a few videos a year or two ago.
      I can't remember the details exactly, but basically, the jump in power is going to be absolutely out of this world compared to what we have right now.

    • @AdmV0rl0n • 1 month ago +2

      @@SkemeKOS CPUs are, at this point, barring changes to AI tech, a poor man's processor for AI. So the uplift will need to be massive: additional NPU tech, or an uplift in already overpriced GPUs. Getting back to the core issue: without a tech uplift at consumer cost, this will be a barrier to usage (and thus sales).

  • @eugo5578 • 1 month ago +4

    VR is the next step. I haven't played flat screen since getting the Quest 3; love it.

  • @MFKitten • 1 month ago +4

    I think use cases like DLSS/DLAA, where machine learning is used to apply one specific "layer" of effect, are incredibly interesting. Your demonstration here shows the potential AND the limitation, and anyone who has messed with ML stuff knows how inconsistent it CAN be. Reining it in is a challenge.
    The most immediate use cases I can think of are things like custom-trained models that generate different material textures, complex physics simulations, effects that replace the shaders we use today, post-process effects, generative AI that randomizes variations on textures so there are no repeats, runtime texture upscaling...
    Full-frame replacement is very promising, but it's a big ask right now. It's going to be in playable games way sooner than we would have thought, but it's going to be janky for a long time before we can claim to have it under control.
    I wanna see someone utilize the chaotic fever-nightmare aspects of generative AI. I love that stuff.

    • @joskun • 29 days ago

      Absolutely. I don't know much about 3D engine processing yet, but I know AI has become stable in terms of models that can keep consistent styles.
      I love the idea of gaming companies using that chaos as a sort of aesthetic, as part of a game where the chaos actually makes sense.
      And I don't know if we have a VR MMORPG yet.
      Imagine an MMORPG in the style of Cyberpunk and Surrogates, one that delivers on what The Division should have been: an open world like WoW 😲 in VR with similar graphics.
      That would be amazing.
      I do think post-processing could mitigate the graphics and memory issues, though I need to learn more about post-processing lol.
      Amazing topics, and I also side with "the gameplay has to be fun", for sure.

    • @MFKitten • 29 days ago

      @joskun Imagine a game where you could move between two worlds: the normal world and the chaos realm. The chaos realm has an inconsistent, chaotic generative AI post-processing effect on it.
      Stuff like that would be sick.

  • @Meatloaf_TV • 1 month ago +5

    2:16 I'm sorry wut

  • @123owly • 1 month ago +6

    I wonder if they'll start doing "rebuilt for AI remasters" where they take a new game and strip it down to just basic wireframes/colors to give AI direction on what to render. That's gotta be the way to fit this on consumer hardware.

    • @Chronomatrix • 25 days ago

      And even then the user will need affordable GPUs specifically designed for this. Not happening any time soon.

  • @JuanFranciscoSanda • 1 month ago +8

    Games would be a playable movie at that point.

  • @JimmyDoyel-by2cp • 1 month ago +79

    It does nothing if the game is not fun.

    • @Jykobe491 • 1 month ago +5

      It helps to decentralize the creation of games, so that people will add their own creative spin on combat and stories

    • @MagnitudePerson • 1 month ago +14

      Bullshit. Technology makes games exciting. Look at the real-time dynamic lighting in Doom 3, or the physics in HL2. The technology IS the gameplay...

    • @iani_2020 • 1 month ago +1

      True. If the game is not interesting, then nothing will make it good.

    • @Google_Censored_Commenter • 29 days ago +5

      @@MagnitudePerson Tell me you've never played a game released prior to 2008 without telling me you've never played a game released prior to 2008.

    • @misberave • 29 days ago +1

      Why do you act as if you can't have both?

  • @MADMOVIESINC • 25 days ago +1

    That is really awesome, Man! Very cool.

  • @endofwarmusic • 29 days ago +1

    Squadron 42 with this filter would be so cool.

  • @williamseipp9691 • 29 days ago +1

    I've played with ENBs in SkyrimVR and realized how subjective they are. After reading ENB authors' notes about "tweaking" settings, it became apparent how much trial and error was involved to produce an acceptable result.
    It's the perfect job for an AI, and it's just one of many applications.
    Great vid.

  • @SMmania123 • 1 month ago

    Very impressive! A slight contrast tint to match the surrounding lighting and I think it's perfect. Just keep the cohesion stable, and this will be a literal game changer!

  • @davekite5690 • 1 month ago +1

    I've been working with 3D graphics for decades now, and I'm convinced this will be the next major shift. There were some examples of AI "upscaling" over the GTA 6 trailer, and it took things to a whole new level. Kind of wondering if Rockstar will attempt to put this in (drop the resolution + AI upscale + real-time AI filtering, based on their own training of their own world). LOL, maybe. Very cool examples of yours; you've gained a new subscriber.

  • @cvpnacorason • 21 days ago

    That is absolutely insane, dude! Imagine that in a VR game; that would blow people's minds.

  • @PhotoSlash • 6 days ago +1

    I'm trying to get this level of realism myself with MetaHuman, but they still look cartoonish. I even added custom scanned skin, but something is still off. I don't know why skin is so difficult to replicate accurately; it's like the skin doesn't properly reflect light as it would IRL. If you place the lighting strategically it can work, but that implies keeping the lights at low brightness. Like, what the hell is missing T-T. Maybe strategically increasing the "oily" effect on parts like the forehead, nose, and under the eyes might work; not every part of the face reflects the same way.

    • @PhotoSlash • 6 days ago

      Also, about the tech in the video: pretty cool, but the AI tends to put too much of an "HDR" effect on the characters; the light is too strong.

    • @jodroboxes • 2 days ago +1

      With skin there is also subsurface scattering happening, not easy to get right.

  • @cosmos3576 • 29 days ago +3

    *Cyberpunk 2077 would be insane. The big "problem" there is just this, human realism.*

  • @SumPixelz • 1 month ago +5

    I don’t know man… it’s cool and all but also very scary.

  • @somethingsomeone9678 • 1 month ago +3

    With anything AI, it feels like we won't have it until the images it creates are consistent, and I don't know what it takes to do that. Even then, I believe the future for games is voxels. I love voxels. I don't think AI can give consistent results without limiting what you can do in a game. On the other hand, if we go the voxel route, we can have destructible environments, physics, and fully material-based rendering, instead of objects with some attached properties and textures.

  • @JackYM78 • 1 month ago +3

    What's amazing about this tech is that it could be implemented in VR: it could render your real self, as you look in real life, wandering an AI-created matrix world. Or it could copy the real world, so you could go there on holiday without actually being there, and interact with real people who are in VR as themselves. Imagine a Tinder date with someone on the other side of the planet, to see how a date works out before actually meeting up for real.

    • @Kevin-np3sx • 24 days ago

      It will probably be able to make movies for you too: just buy a program, tell it what kind of movie you want to watch, and it generates it for you.

  • @EagleTopGaming • 1 month ago +1

    Even not being in real time, that is really nice. Unreal is getting used for animation projects more and more. The current Unreal performance capture can make things feel pretty stiff at times. Being able to touch things up with an AI post process layer might be a great step for a lot of projects to give them that needed layer of style, appeal, or expressiveness.

    • @Bluedrake42 • 1 month ago

      I bet you could do a generative AI layer… and then RE-motion capture off the AI output lol

  • @MattiaHeron • 1 month ago

    I had a similar idea using an engine with an easily available depth map that a processor could use. You'd basically use segmentation plus a color ID for the models, so the AI could use a model's vertex color to look up details on that model in a database: the color ID is essentially the name of the thing, and the database contains the prompts for it, for greater accuracy.
    This is a really cool way you have it set up to run here through ReShade, as I am a ReShader as well! I love it dude
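
A toy version of the color-ID-to-prompt lookup described in the comment above: each vertex-color ID in the segmentation pass keys into a database of text prompts, so the AI gets detailed guidance for whatever is on screen. All colors, prompts, and names here are made up for illustration.

```python
# Hypothetical prompt database, keyed by the flat color-ID assigned
# to each model in the segmentation pass.
PROMPT_DB = {
    (200, 40, 40): "weathered soldier, olive fatigues, photorealistic skin",
    (40, 200, 40): "dense temperate forest, wet pine needles",
    (40, 40, 200): "cracked asphalt road, light snow cover",
}

def prompts_for_frame(id_colors):
    """Collect the unique prompts needed for one frame, given the
    color IDs visible in its segmentation pass. Unknown IDs are skipped."""
    seen = []
    for c in id_colors:
        p = PROMPT_DB.get(c)
        if p and p not in seen:
            seen.append(p)
    return seen
```

In a real system the collected prompts (plus the depth and ID passes) would condition the generative pass, which is the accuracy gain the comment is after.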

  • @countsekou490 • 19 days ago

    I always thought it would be cool if soldiers' combat uniforms could change color with the environment. Those soldiers going down the road with snow in the background stood out big time, like ants on Wonder Bread. Great share. Phenomenal work, sir.

  • @NClark1 • 1 month ago +1

    A video game concept: 'Relationship Dynamics' - Video game with people meeting other people and developing friends and lovers.

    • @Rastor0 • 27 days ago

      OK, but they already made Life by You, then canceled it instead of releasing it.

  • @DrxSlump • 19 days ago

    The idea of AI picture processing per frame came to me when I first read about DLSS. Glad to see this happening!

  • @Dinoslay • 29 days ago

    So the underlying model and graphics basically serve as a guideline for the AI filter. That's a very clever idea.

  • @BRADYLIV • 19 days ago

    This needs way more views. This was absolutely incredible.

  • @thronosstudios • 20 days ago

    You're not the first one I've seen doing this, but I'm glad the technique is being used more commonly these days.

  • @tangentsmith2961 • 1 month ago +26

    Even I, a person who wishes to see realistic graphics in games before I'm gone, understand that realistic graphics are a thing for a very limited audience. People care about games. Good games. People care about an art style created by people. An AI post-processing filter means every gamer may experience the same game looking slightly (if not completely) different, because AI processing can't be synchronized or predicted, and visual artifacts will appear here and there. You can see it right on the MetaHuman character's face: the skin is kind of strange, and the skin near the hairline is white instead of black. All those demos with real games look like a bloody freak show. Nobody needs those realistic bodycam shooters; they're cool to look at maybe once and that is it. How many people use the GTA 5 or Cyberpunk mods that make them more realistic? From what I found, it's something like 10k to 60k downloads. Maybe there are more, but I don't think so. People wonder how cool graphics can get, and they watch those videos with ultra-realistic mods, but they don't really need them. Millions of people happily keep playing games that look like PS2 games at a higher resolution. Blizzard's devotees don't care about visuals either. Millions are playing anime-style games. Only a few people need a realistic-looking game to get away from our realistic reality. Arma 3 or Squad players might be happy to run a realistic-looking "ArmSquad", but wait: how many people are playing Reforger, which is on a whole new level compared to Arma 3? A 3,700 daily peak and a 12k all-time peak!? While Arma 3 has a 12k 24-hour peak and a 57k all-time peak? Right! Only a few need those realistic visuals. And why don't people play the far more realistic Arma Reforger?
    Because developers can't provide gamers with proper gameplay, so only a few people are willing to entertain themselves with it. People pay money not for the graphics but for the game! And the game is meant to entertain us!
    So nah, no AI realism filter will make millions for anybody. Perhaps some AAAAA studios will try to add it as a feature in the settings menu, or try to sell it separately to see if gamers want it, but I'm 100 percent sure this is just a money-wasting stillborn idea.

  • @MADMOVIESINC • 13 days ago +1

    Did you use a particular model that is open to the public? If so, which one did you use in your example?

  • @onerimeuse
    @onerimeuse 1 month ago

    I was just thinking about how much I missed your tech discovery videos. Would love it if you'd pepper those in again sometimes.

    • @Bluedrake42
      @Bluedrake42 1 month ago

      At the current phase of Operation: Harsh Doorstop's development, tech discovery is basically all I'm thinking about haha, so you're probably going to see a lot more of these soon.

  • @MisterMcKnight
    @MisterMcKnight 13 days ago

    Output is crazy amazing, wow, a new era is here.

  • @buriedbits6027
    @buriedbits6027 18 days ago

    You can create some truly mind-blowing effects by targeting specific portions or ranges of the RGB spectrum and adding varying degrees of randomness to those levels. This approach can lead to surreal, unexpected results. If you combine knowledge of image processing (think Photoshop filters) with AI that can manipulate entire frames or specific color spaces (RGB, luminance, or even LAB), the creative possibilities are endless. This opens up exciting opportunities for gaming, where the complexity of rendering these effects likely won't strain the GPU or CPU much. I really admire your innovative approach to this; it's incredibly exciting!
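    A minimal NumPy sketch of the channel-targeted randomness described above (the function name and parameters are illustrative; a production pipeline would more likely convert frames to LAB space first via an image library):

```python
import numpy as np

def perturb_channel(frame, channel=0, lo=100, hi=200, strength=12.0, seed=0):
    """Add Gaussian noise only to pixels whose value in `channel`
    falls inside [lo, hi], leaving the rest of the frame untouched."""
    rng = np.random.default_rng(seed)
    out = frame.astype(np.float32)
    band = out[..., channel]            # view of the targeted channel
    mask = (band >= lo) & (band <= hi)  # pixels inside the chosen range
    band[mask] += rng.normal(0.0, strength, size=int(mask.sum()))
    return np.clip(out, 0, 255).astype(np.uint8)
```

    The same masking idea extends to luminance or LAB channels once the frame has been converted to that space.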

  • @courtlaw1
    @courtlaw1 29 days ago

    Nice. Every game ends up being its own AI engine with a smaller hard-drive footprint. This is literally the most amazing tech I have seen using AI in video games. I would love to see every game use this.

  • @cooperdevs
    @cooperdevs 1 month ago +3

    What did you make here? Did you build a custom model, or just use Runway/Luma? This looks almost identical to Runway, so I'm just a little confused about what you did.

  • @ladylight8585
    @ladylight8585 1 month ago

    This is super great and I want to learn how to use it. I've been talking about this for a while now, though I'd thought of incorporating it into a TV or console directly.

  • @JustThyce
    @JustThyce 29 days ago

    This is really insane! Congrats!

  • @XenobraneStudios
    @XenobraneStudios 18 days ago

    This is a fantastic use of generative Ai. Great work!

  • @ohheyvoid
    @ohheyvoid 24 days ago

    We're doing film pre-vis using UE5; this would be great for solving the MetaHuman plastic look. Right now we run our image sequence through Stable Diffusion to get a similar look, but the temporal consistency isn't as good. Using this post-process could be what we're looking for.
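    One common mitigation for that frame-to-frame flicker when frames are stylized independently is to blend each stylized frame with the previous output, i.e. an exponential moving average over the sequence. A hedged sketch (the function name and alpha value are illustrative, not what the video uses):

```python
import numpy as np

def temporal_smooth(frames, alpha=0.7):
    """Blend each stylized frame with the previous blended output to damp
    flicker; alpha is the weight given to the incoming frame."""
    out, prev = [], None
    for f in frames:
        f = f.astype(np.float32)
        blended = f if prev is None else alpha * f + (1.0 - alpha) * prev
        out.append(np.clip(blended, 0, 255).astype(np.uint8))
        prev = blended
    return out
```

    A higher alpha keeps more per-frame detail; a lower alpha trades detail for stability.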

  • @Chronomatrix
    @Chronomatrix 25 days ago

    I see it, you are right: this has remarkable potential. The thing is, we might end up making games with very basic low-poly shapes and letting the AI apply all the graphics in whatever style the developers choose, or even the end user. The challenge now is generating this post-rendering in real time, but considering how quickly AI-specific hardware is being developed, we might be seeing it really soon, though not for the average gamer. I wouldn't be so optimistic about the timing: big jumps in hardware take a long time, since first they have to make affordable GPUs, and only then can the replacement begin. It is pointless to make a game that only a tiny minority of users can play. I think we are talking 5-7 years to see the first game with this technology, and I'm being very generous.

  • @Cepheid-IDM
    @Cepheid-IDM 29 days ago

    A Bluedrake filter on top of a VR game like Blood Trail would be scary. That game is a maniac simulator.

  • @mikeharvester
    @mikeharvester 1 month ago +2

    Impressive. Also, rather frightening.

  • @TheReelDeal-jq4qt
    @TheReelDeal-jq4qt 29 days ago

    Every game studio should definitely be using this! THAT would be something to see in VR (using UEVR mod)

  • @ryancory5958
    @ryancory5958 22 days ago

    I've been thinking about this sort of thing, though I'm not tech-savvy in the field. I assumed that eventually generative AI would let us play through old games while it updates all the textures and animations. I was thinking of EverQuest (the MMO from 1999): you could give all the NPCs AI with personality subsets and information drawn from the tons of lore, so you could interact with them.
    The development AI would let you speak to it as a developer, so you could be running through zones, interacting with NPCs, and having a conversation with the dev AI software to direct, suggest, change, implement, etc.
    I feel like it could really bring back a lot of otherwise seriously dead games that used to be good but have aged badly. Not just graphically: mechanics, animations, and interactions could all be implemented just by having a conversation with the AI dev-kit tools. You could make a suggestion, or drag and drop a video of a person doing a backflip or a dragon spitting acid, or a picture of some real-life texture, and poof... into the game it goes.

  • @SolarScootersuk
    @SolarScootersuk 29 days ago

    Imagine reliving OLD video games using this tech... Like Spyro! Or Crash Bandicoot

  • @Docswavestranslucent
    @Docswavestranslucent 15 days ago

    Implementing AI-driven enhancements to improve video game performance and visual quality is a multifaceted endeavor. The timeline for such a project depends on various factors, including the complexity of the game, the specific AI techniques to be integrated, and the resources available. Here’s a general breakdown:
    1. Research and Planning (1-2 months)
       • Objective: Understand the current state of AI technologies like DLSS, FSR, and AI-enhanced ray tracing.
       • Activities: Review existing literature and case studies; define project scope and objectives; identify necessary tools and technologies.
    2. Skill Development (2-3 months)
       • Objective: Acquire the necessary skills in AI, machine learning, and game development.
       • Activities: Enroll in relevant courses or training programs; practice with AI frameworks and game development tools.
    3. Prototype Development (3-4 months)
       • Objective: Create a basic version of the game incorporating AI-driven features.
       • Activities: Develop core game mechanics; integrate AI upscaling and dynamic resolution scaling; test for functionality and performance.
    4. Testing and Optimization (2-3 months)
       • Objective: Ensure the game runs smoothly and efficiently.
       • Activities: Conduct performance testing across various hardware configurations; optimize AI algorithms for real-time execution; address any bugs or issues identified.
    5. Finalization and Deployment (1-2 months)
       • Objective: Prepare the game for release.
       • Activities: Finalize game assets and content; conduct final quality assurance tests; deploy the game to the chosen platforms.
    Total estimated time frame: 9-14 months.
    Note: This timeline is an estimate and may vary based on project scope, team size, and unforeseen challenges. Engaging with experts in AI and game development can help streamline the process.

  • @arnoldhagen8577
    @arnoldhagen8577 1 month ago

    Unreal Engine is rather impressive, but I swear that about a week ago this was exactly what I was talking about in a comment regarding next-gen graphics. I'm happy to see that someone who actually knows how all of this functions has already started to implement it. I look forward to the future of this post-processing technique. Great job!

  • @prattyv9208
    @prattyv9208 19 days ago +2

    This is impressive, but we are at the point where graphics have reached a threshold where they don't hold as much importance compared to really creative and unique gameplay mechanics, better-feeling AI interactions and world interactions, and a gameplay loop that is fulfilling and satisfying. I want AI to enhance those aspects rather than just improving graphics and visuals, which already feel quite good and at points very samey.

  • @davidj9977
    @davidj9977 12 days ago +1

    The amount of memory and compute used by AI image generators is much more than even ray tracing requires, so while the results are interesting, expect AI imaging to stay "offline" for quite a while.

  • @23dsin
    @23dsin 17 days ago

    I tried this two years ago, but generative AI was much slower then, so it wasn't for real-time use; it worked on single screenshots from games. This will be the future of gaming.

  • @orangesunlabs
    @orangesunlabs 15 days ago

    Yeah, this is amazing. Also looking forward to real-time lip sync between players. I could see this being used in any open world, or a blockbuster movie like Raiders of the Lost Ark, StarGate, Star Wars, Star Trek, you name it. Sounds super fun!

  • @buriedbits6027
    @buriedbits6027 18 days ago

    I wrote an article when the PS3 came out, where Sony demoed some very realistic, next-level face gestures. It was amazing. You made this? Incredibly good.

  • @benmcreynolds8581
    @benmcreynolds8581 29 days ago

    THIS is exactly what I have been hoping to see these tools and advancements used for!!!! Awesome work dude! Could you use it for artistic aesthetics as well, instead of only realism? Could it improve physics effects in games?

  • @Sinoxqq
    @Sinoxqq 19 days ago

    I love how people undersell this when it's gonna be mind-blowing. The WHOLE of Skyrim looking like this would be so immersive; I'd spend hours upon hours just looking at everything.

  • @rafihmahfooz5074
    @rafihmahfooz5074 13 days ago

    This is interesting, and you should pause your studies and focus 100% on this technology. You should secure the IP first and license it out to bigger studios.

  • @vigamortezadventures7972
    @vigamortezadventures7972 22 days ago

    My god! Awesome. You could also, in theory, train AI on character animation and camera movement with controller inputs to simulate 3D and cut back on actual 3D effects.

  • @yaladdin86
    @yaladdin86 29 days ago

    This is actually incredible. It really will be a game changer: game graphics will be even more believable, ultra-realistic, with less uncanny characters. I realized something: when I play a AAA game made in UE, I'm always swept away by the amazing graphics and cutscenes, but in the back of my head I still have the thought "it still feels uncanny, not super realistic" about the characters. Maybe this is the thing that will remove that uncanny CG feel from game characters in the future.

  • @Markios-g4g
    @Markios-g4g 24 days ago

    These are the most accurate human face models I've seen that respond this well to someone's real-life face.
    I can definitely see this becoming indistinguishable from reality if YouTube doesn't put a tag over them specifying "A.I. video."

  • @unraveledultimatefate
    @unraveledultimatefate 27 days ago

    I would like a revival of the Half-Life series and the Quake series, and a reboot of DOOM 3 as a tech demo for a survival sci-fi horror comeback!

  • @unkownnameless2699
    @unkownnameless2699 1 month ago +1

    Unreal 1 with this system would be mind-blowing, but we haven't seen how well the system actually manages 3D environments, just people.

  • @cavadorarpeiro4973
    @cavadorarpeiro4973 21 days ago

    I loved that video!!! Could you make more videos about these new technologies? I can't help watching a lot of these videos on YouTube 😊

  • @ruimario.productions
    @ruimario.productions 1 month ago +1

    This could be really good if you trained the AI on the human faces the models are based on.

  • @YSLWOODY5STAR
    @YSLWOODY5STAR 18 days ago +2

    I could wait 3 more years for GTA 6 if they applied this post-processing in their engine.

  • @devilsposterboy
    @devilsposterboy 1 month ago

    0:03 I let out a deep "wooOOooahhhh......."

  • @sash4all
    @sash4all 17 days ago

    As an artist who loves it, I would like to see AI in the future just as a tool to bring your designs to life. Rigging, skinning, etc.: all the time-consuming stuff could be done in minutes... but what I truly believe is that AI is more and more the death of creativity.
    It's hard enough to make something completely new, and AI is doing it in seconds. I don't think I like the direction this went... and this is just the beginning.

    • @Felhek
      @Felhek 3 hours ago

      So according to your psychology, the more time you need to create something, the more creative you feel.
      Less time, less creative.

  • @DavidSpecialis
    @DavidSpecialis 22 days ago

    At some point, are we going to blur the line between useful and realistic if it works perfectly in real time? This will be a game changer for studios that love to storytell with realistic characters, like Rockstar, Naughty Dog, and a lot of other studios that would love to put work in that direction. This is awesome, man. Keep up the great work; I hope you capitalize more on the work you're doing collecting these awesome upcoming systems. I really enjoy your channel.

  • @finfrog3237
    @finfrog3237 1 month ago

    I'd like to see this used in racing and driving sims.
    It looks amazing! Impossibly so.

  • @HK94
    @HK94 1 month ago

    Super excited to see where this leads.

  • @petrofsko
    @petrofsko 1 month ago

    Wow, amazing! Can't wait to experience these character graphics in-game one day.

  • @wannabelikegzus
    @wannabelikegzus 28 days ago

    I'm most excited for what these kinds of things will do for indie films. It feels like we're already at the point where a two- or three-person team could use all of Unreal's environment and animation automations to build something that could potentially match a Pixar-level animated film.

  • @button9
    @button9 1 month ago

    I do think it would be cool if people had the option in a character creator to just type out what their character looks like, or pick from checkboxes to help build the prompt. This could go further in another direction too, where you create your own avatar from photos without it looking like a pasted-on texture map.

  • @zil0484
    @zil0484 16 days ago

    Hyper-realistic graphics with current physics engines will make games even funnier and creepier when there's a bug.

  • @ResoluteDeicide
    @ResoluteDeicide 1 month ago

    I had no idea this existed! That was insane to witness... the implications of this tech are incredible to think about.
    Just beware of those who would fear-monger about this tech. Calling it right now: we're going to see a bunch more of that soon.

  • @Moctop
    @Moctop 24 days ago

    Technically you'd be able to go back to your old favorite games and run an AI filter on them for a dramatic makeover. This is definitely the direction we're heading. Development time would drop drastically if you could use fairly basic models, since they're going to be reimagined anyway, yet have them look way better. You could also easily customize and personalize the look of a game, modding and whatnot. Exciting times.

  • @aidanbobhog6303
    @aidanbobhog6303 1 month ago

    This is something I have been curious about since the first-ever image-gen AI.

  • @andhikayt
    @andhikayt 29 days ago

    Amazing! Can't wait... all the characters in Call of Duty will look like this!

  • @eugkra33
    @eugkra33 29 days ago

    I've been saying this for months, and no one believed it. The thing is, we don't even need the base graphics underneath to be good. You could take Xbox 360-level graphics, save a massive amount of frame time on rasterization, and spend 90% of the frame time on the machine-learning filter instead.

  • @SiimKoger
    @SiimKoger 24 days ago

    People like to say that graphics don't change the gameplay, and for 99% of cases I'd agree... but I feel like if graphics get to this level, people will actually start playing games in a different way. If the characters in Skyrim looked like this, and the animation physics were realistic as well, I would definitely treat the NPCs better and take a more down-to-earth approach to playing it.

  • @GhostBranchFPV
    @GhostBranchFPV 22 days ago

    I've been deep-diving into AI and training my own models for a couple of years now, and I can tell you we won't have to wait for an RTX 8090 to see this in games. At the current development speed of AI technology, we'll see generative post filters like this running at 60-ish FPS within the next two years, maybe even next year.

  • @TheVoiceOfChaos
    @TheVoiceOfChaos 1 month ago

    Having it run in real time would be a major jump, honestly. I think that should come sooner than a whole game being AI-generated, which frankly sounds like a pipe dream, but hey, I don't know what is and isn't possible with neural network systems.

  • @LeonGustin
    @LeonGustin 1 month ago

    YES!! People are so sceptical, but this tech will be here sooner than they think. Runway Gen-3's video-to-video tech does this amazingly! This will be here in real time within a year!

  • @JoeyBComer
    @JoeyBComer 17 days ago

    What an insane video. Thank you for sharing, and demonstrating.

  • @lvdon4415
    @lvdon4415 4 days ago +1

    PLEASE run The Polar Express through this filter!!!

  • @JanBadertscher
    @JanBadertscher 1 month ago

    Questions:
    1. Are you applying a diffusion model to the 2D render of the 3D scene, to the source textures after shading, or feeding the 3D scene information and textures to a custom model? (I doubt the last one, as you would have to train your own.)
    2. Is the method spatio-temporally coherent?
    3. Do you think LLMs' native audio multimodality (llama-omni, etc.) will be used for more natural NPC interaction soon? Modern LLMs have been available for almost 3 years, and not a single game uses LLMs for natural NPC interaction (audio / text, etc.).

  • @beyond-thenoise
    @beyond-thenoise 1 month ago

    I have been pondering the same thing lol. I'm just further behind, I guess lol. Very cool to see other devs making leaps in this direction.

  • @PromptKing
    @PromptKing 21 days ago

    THANK YOU THAT IS AMAZING!!!!!!!!!!!!

  • @jollygoodfellow3957
    @jollygoodfellow3957 29 days ago

    I think this could be done in a multi-GPU setup. One GPU runs the game, whose video is passed to the second GPU, whose sole job is to run the AI model, and the output is passed to the screen. This way you can run at a higher frame rate.
    Essentially it's like running video capture software with a capture card, processed by another GPU.
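    The two-stage handoff described here can be sketched as a producer/consumer pipeline. In this illustrative snippet, CPU threads stand in for the two GPUs and the callables are placeholders for the renderer and the AI filter:

```python
import queue
import threading

def run_pipeline(num_frames, render_frame, ai_filter):
    """Stage 1 produces frames while stage 2 filters them, so the two
    stages overlap in time like a render-GPU / AI-GPU handoff."""
    q = queue.Queue(maxsize=4)  # small buffer between the stages
    results = []

    def producer():
        for i in range(num_frames):
            q.put(render_frame(i))
        q.put(None)             # sentinel: no more frames

    def consumer():
        while (frame := q.get()) is not None:
            results.append(ai_filter(frame))

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    Per-frame latency doesn't shrink, but throughput improves because rendering frame N+1 overlaps with filtering frame N.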

  • @MrRandomPlays_1987
    @MrRandomPlays_1987 11 days ago

    4:45 - That was exactly my assumption; I knew this wasn't in real time. It had to not be, since as far as I know nothing comes close to real-time diffusion on average or above-average GPUs yet. There's that new demo of diffusion-rendered Minecraft, but that's something else entirely, and it still requires super-powerful GPUs (way beyond what consumers currently own) to run it barely, at a low frame rate and low quality. But yeah, one day we'll have full video games rendered with AI diffusion on top of a simpler base (the game's own graphics), and then it's going to be a game changer lol.