Comments •

  • @Jeal0usJelly
    @Jeal0usJelly 9 months ago +103

    What a time to be alive!

    • @Clawthorne
      @Clawthorne 9 months ago +10

      What a time to be alive!

    • @kcfresh53
      @kcfresh53 9 months ago +26

      Get your papers fellow scholars

    • @brodriguez11000
      @brodriguez11000 9 months ago +1

      The industrial revolution was indeed a good time.

    • @FuZZbaLLbee
      @FuZZbaLLbee 9 months ago +11

      Holding my papers tightly

    • @Eichro
      @Eichro 9 months ago +10

      Imagine where we'll be two more papers down the line

  • @nicolasdiolez
    @nicolasdiolez 9 months ago +129

    Very proud that you used my Arc de Triomphe model as the example for photogrammetry 😅 Cool video! I need to learn Gaussian splatting; it seems crazy good!

    • @nicolasdiolez
      @nicolasdiolez 9 months ago

      @@JohnDavid888 Thank you for the insight! It seems that, for now, it's not suitable for professional use, but I suppose it's going to evolve.

  • @johnny2552
    @johnny2552 9 months ago +30

    Bro, your channel is unlike anything else. I appreciate you moving like a madman to get these videos out for game devs and artists. Thanks!

  • @theaninova
    @theaninova 9 months ago +19

    I think what probably stands out most to me is that no matter what angle you pick, it just doesn't look like a bad render. It looks like a blurry or smeared photo, or maybe a painting with a particular style. I guess the big questions are going to be whether we can apply lighting to it in real time in some form, and whether we can compose multiple of them together. It seems to be really, really good at rendering trees; they just look so fuzzy and detailed from a distance even when they're just a few blobs. I'd be interested to see a racing game with a full scan of the track using Gaussian splatting for the more distant environment and traditional rendering for the road and car.

    • @s4shrish
      @s4shrish 9 months ago +1

      I feel like this renders stuff closer to the way we as humans perceive it.
      A point of light, when defocused, is stretched into a circular shape. Basically a bunch of dots that bleed into each other more or less based on focus, which is kind of what this is as well.

  • @marsimplodation
    @marsimplodation 9 months ago +19

    No way you are talking about this paper right now, I was about to read it tomorrow for college-related stuff xD
    I have a course on computer graphics dealing with the latest research at a bachelor level, and this is one of the possible papers to work with.

    • @gamefromscratch
      @gamefromscratch 9 months ago +9

      Start with the Aras rundown before jumping into the paper, it's a really elegant TL;DR summary.

    • @marsimplodation
      @marsimplodation 9 months ago +3

      @@gamefromscratch thanks I will. Will need to read that paper either way tho

  • @Kumodot
    @Kumodot 9 months ago +19

    Amazing breakdown of this tech. One thing that I want to see, and that will probably happen real soon, is a combination of many Gaussian splatting scenes to cover bigger areas, plus single assets ready for composing scenes.

    • @vitordelima
      @vitordelima 9 months ago +2

      NeRF seems to have something like this already and maybe it can be adapted to Gaussian splatting.

  • @capsey_
    @capsey_ 9 months ago +5

    I feel like the perfect usage for this tech is Google Street View (especially in VR). You don't need dynamic lighting and objects, target object details are important, and having a big open world is not a requirement. I wonder if it's possible to have multiple scenes using splats and smoothly transition between them as the camera moves, to make moving along the road in Street View less weird than it currently is.

  • @kyoai
    @kyoai 9 months ago +13

    I could see this being used in parallel with traditional methods: use this method to render specific static mesh models that are high in detail, while other parts of the game world, especially dynamic parts that are animated, stay as-is with polygons + textures.

    • @vitordelima
      @vitordelima 9 months ago

      It could be animated by transforming the particles the same way vertices are transformed in regular models, for example.
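
A minimal sketch of that idea, assuming each splat is stored as a center plus an orientation matrix (the names and layout here are illustrative, not from any particular implementation). Moving a splat is the same math as moving a mesh vertex; note that the baked, view-dependent lighting would not follow the new orientation, which is the caveat raised elsewhere in this thread.

```python
import numpy as np

def transform_splats(centers, rotations, model_matrix):
    """Rigidly transform Gaussian splats the way mesh vertices are transformed.

    centers:      (N, 3) splat centers
    rotations:    (N, 3, 3) per-splat orientation matrices
    model_matrix: (4, 4) rigid transform (e.g. a bone or object transform)
    Scales, opacity and SH coefficients are left untouched.
    """
    R = model_matrix[:3, :3]          # rotational part
    t = model_matrix[:3, 3]           # translation part
    new_centers = centers @ R.T + t   # same as transforming vertex positions
    new_rotations = R @ rotations     # rotate each splat's local frame
    return new_centers, new_rotations

# Example: rotate a two-splat "object" 90 degrees around Z and lift it up one unit.
c = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
r = np.stack([np.eye(3), np.eye(3)])
M = np.eye(4)
M[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
M[2, 3] = 1.0
print(transform_splats(c, r, M)[0])
```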

    • @jlewwis1995
      @jlewwis1995 9 months ago

      @@vitordelima Yeah, I don't see why you couldn't at least do simple animations on the models. Though considering the high point density that's probably required, maybe it would be best to only use simple N64-style animation (with the different parts of the model being separate) and not full-on skeletal animation for now.

    • @UltimatePerfection
      @UltimatePerfection 9 months ago +2

      @@uusfiyeyh I'm sure it's a hurdle we'll eventually overcome, just like in early 3D games all shadows and depth were baked into the diffuse texture, and only later did we get things like bump and normal maps. But yeah, for now it is more of an archvis/survey tool than something actually useful for gamedev.

    • @morgan0
      @morgan0 9 months ago

      And it could be useful for turning a complex, highly detailed ray-traced scene into something that could be played on much more normal hardware, maybe with some level of reactivity added in so the things that can move still interact with it.

  • @augustday9483
    @augustday9483 9 months ago +28

    So far it seems like these scenes are basically one big thing that you import into your project. To make it usable for a proper game, I'm imagining a future where you have individual models composed of splats (for example, a bike or house) which can then be imported into a larger scene. However, the problem with that is that these splats seem to have their lighting baked in. If you moved the bike into a different scene with different lighting, it would look really out of place.
    I find it hard to imagine that this would ever take over and replace polygonal rendering.

    • @olwiz
      @olwiz 9 months ago +6

      Oh, but it's possible. Just not yet. You're forgetting this has just been released without optimization, which the authors themselves acknowledge, and there's no hardware tuned for it yet. Polygon rendering was invented around the 60s, and it took a few decades until we had hardware tuned for polygons; that tuning never stopped. The first GPU tuned for this will likely give something like 3x the current performance (the first step always has the biggest gains). There may be some algorithm that lets the blobs respond to light, but even if not, remember that software development, and games in particular, has a long history of tricks... For example, they might use invisible polygons (simpler, untextured) for collision, and then someone figures out how to use those same polygons to inform the lighting. The first iteration would be ugly (splats AND polygons, heavy), but then the methods get updated, shortcuts are found between splat data and polygons, simpler polygons are needed, GPUs catch up... In the past few decades we've had what, four different named anti-aliasing methods, lighting methods and so on; not only did each approach improve, so did the GPUs tuned for the tricks used, plus the supporting software around them (DirectX, Vulkan, etc.)...
      ...and something like the above would be possible without adding AI into the mix. Now with AI? The same way AI is being trained for upscaling and frame generation, AI on the GPU, maybe even with a dedicated chipset, could be trained to deduce and recreate light and shadows from splats on the fly. And we're talking about today's tech only; who knows what breakthroughs we'll get in hardware or neural networks. All the fast progress we've been seeing has come with zero new milestone improvements hardware-wise. Just the other day I read that Intel is tinkering with glass for chipmaking, which could break a current physical barrier for computing.
      I "predicted" current-gen AI about a decade ago, when I first read about neural networks at university, which at the time were far, far away from anything usable. I'm no seer, and I'm far from the only one. You just need a bit of imagination to extrapolate the likely path of current-gen tech; the entire industry does it. The only unknowns are how long it will take and exactly how, but close approximations are easy.
      And I don't think this will take a decade to arrive. It might, we can never know, but besides the current pace of AI and GPU tuning via AI, remember that NeRFs and similar splat tech are already something like a decade old...
      Heck, I just realized the kind of trick Nanite did for polygons could happen for splats too: something between an algorithm, LODs, and AI managing blob/splat density, so it could have higher resolution (more points) than the examples in the video for things seen up close while dynamically lowering the density in the background based on distance to the camera and so on... The more I think about it, the more possible approaches come up. Just wait: academia will be all over this with students trying different things, and as soon as the first GPU or drivers are tuned for it, you can bet game devs will give it a spin too, the folks at Unreal and Unity definitely... Heck, knowing Nvidia, the moment they saw this, calls were probably made to form a new team to toy with it.

    • @Teodosin
      @Teodosin 9 months ago +3

      Surely that lighting problem can be solved. It just needs time for people to figure it out.

    • @jensenraylight8011
      @jensenraylight8011 9 months ago +3

      Hacky solutions always lead to a ton of soul-crushing cleanup afterwards.
      Professionals found this out the hard and painful way.

    • @Teodosin
      @Teodosin 9 months ago +1

      Wow, such pessimism

    • @jensenraylight8011
      @jensenraylight8011 9 months ago +3

      @@Teodosin Not pessimistic, just being a realist.
      It's easy to be overly optimistic if you're an amateur with zero knowledge of how things work.
      Also, people who use this kind of hacky technique don't give a damn about art direction; to them it's just something that gets in the way and should be eliminated.
      Which will result in generic games.
      At this rate, you might as well just write a prompt to make the full game for you;
      why bother creating a model or writing a single line of code?
      You already generated the model, so why stop halfway? Go generate the whole dang game.

  • @Kumodot
    @Kumodot 9 months ago +72

    I really want to see applications using Gaussian splatting in VR in something like the quest3. That needs to happen!

    • @kuromiLayfe
      @kuromiLayfe 9 months ago +8

      It is pretty much Unreal's Nanite tech but at a point-cloud level instead of polygonal. The biggest issue is that at larger depths you get a noisy dithering effect, which especially in VR can cause nausea when rendered at 90+ fps in real time… amazing for still scenery but not so much for motion.

    • @fledgeking
      @fledgeking 9 months ago +3

      I think the Quest 3 would probably have trouble with the polygon count; the render distance might have to be pretty low.

    • @steven11101010
      @steven11101010 9 months ago +5

      I assume you are referring to being able to navigate a real-world space. But that's not really the use case. The key issue is the way the point clouds are generated: they are generated *around* an object. That's what enables the recreation of the object in 3D. For recreating environments you need the inverse, which isn't this. You can see the issues in the video when Mike ventures just a few yards from the bike.

    • @pixelfairy
      @pixelfairy 9 months ago +1

      The XR2 is more about texturing than geometry. It's the opposite of what it's made for. You could post-process it into a low-poly model, but then you might as well use NeRF or traditional photogrammetry.

    • @fledgeking
      @fledgeking 9 months ago

      @@pixelfairy Yeah, they're kind of a package deal

  • @joshwent
    @joshwent 9 months ago +61

    Absolutely jaw dropping technology. So many practical applications for simpler photogrammetry type tech; virtual museum walkthroughs, interior building walkthroughs like google maps but indoors, even maybe self scans to send to a telemedicine doctor. Just endlessly cool possibilities!
    For games however, I'm honestly not excited about this. Graphics with even just a pinch of intentionally designed style are much more immersive to me than just playing in a perfectly representative world. I already spend every day IRL, show me something NEW! 😁

    • @gamefromscratch
      @gamefromscratch 9 months ago +28

      Ok.... what about a Wallace and Gromit or Fraggle Rock style world, but physically modeled and then captured with Gaussian splatting? Or old-style stop-motion Harryhausen-style worlds, but scanned and playable! ;)
      Although honestly a traditional pure CG workflow would probably still be cheaper and more effective.

    • @joshwent
      @joshwent 9 months ago +8

      @@gamefromscratch Clayfighter 2023?! I love it! 😆

    • @carpenterblue
      @carpenterblue 9 months ago +10

      @@gamefromscratch Actually, the YouTuber Olli Huttunen did a really cool test where he used a 3D model made in Blender and converted it to a 3D Gaussian splat. You absolutely can mix and match. The splat is built from a sequence of pictures. Technically speaking, you could animate a flythrough of a room on paper, scan it, send it to a computer, and have a 3D splat of that.... if you are insane enough, that is.
      Also.... I think there is high potential for someone to just straight up build a sculpting/painting tool eventually, in the vein of Quill.
      This is an absolutely GIANT thing for games.

    • @JB-fh1bb
      @JB-fh1bb 9 months ago +8

      The Gaussian splat doesn’t have to use point clouds or photos and could be the actual rendering engine for 3D games. The biggest improvement here over traditional pipelines is the *massive* reduction in computing while maintaining (and arguably improving) visual quality. Imagine this being used for a next-gen version of Dreams that can be played on the Quest 2.

    • @Vaeldarg
      @Vaeldarg 9 months ago +4

      The interesting thing to me is that this tech looks familiar: when I was looking at companies in the VR/AR space, back when "light fields" were causing buzz through ones like Magic Leap, I found an obscure company named "Euclidean". Their idea was using point clouds (the "light fields") for VR, and they liked showing off the detailing. This seems to be a much-improved evolution of that.

  • @georgezubat7225
    @georgezubat7225 9 months ago +42

    This really reminds me of dreams. Very detailed in specific areas, but when you try to remember non-focal elements it just isn't there. This would be a great artistic rendition of dreams!

    • @nebuchadnezzar916
      @nebuchadnezzar916 9 months ago +1

      Interestingly, when I used to have OBE's, I experimented with focusing on distant details and things were grainy, not unlike this.

    • @bgrz
      @bgrz 9 months ago

      This technology has a lot in common with the game/engine called Dreams on Playstation.

    • @DrkFX
      @DrkFX 9 months ago +4

      Agree, looks similar to how Flecks are rendered in Bubblebath engine (Media Molecule's Dreams, Playstation).

    • @georgezubat7225
      @georgezubat7225 9 months ago

      @@bgrz I always wondered how the rendering in that game worked!

    • @TeckGeck
      @TeckGeck 9 months ago +1

      I was thinking of Dreams on the PS4 too

  • @maymayman0
    @maymayman0 9 months ago

    Mike your channel is awesome and thank you for covering all the different stuff you do!

  • @webgpu
    @webgpu 9 months ago

    I rarely comment on a video's quality and content, but I had to come here to congratulate this channel's creator on the good rhythm, speed, and clarity of pronunciation on a technical topic.

  • @kreur
    @kreur 9 months ago +4

    I think we will see this sooner in more static-ish applications like real-estate virtual tours. Maybe some experimental games that happen in a static-ish environment like a single house. But who knows, maybe it will be game-production ready in a year.

  • @_remblanc
    @_remblanc 9 months ago +3

    I can imagine someone pulling the PS1-era tricks with models running over these at fixed-camera angles to produce a scene. It would be a highly unorthodox workflow, though, and quite pricey at that.

    • @ekstrapolatoraproksymujacy412
      @ekstrapolatoraproksymujacy412 9 months ago +2

      It works as blobs in 3D space, just like volumetric clouds or fire; no need for any "PS1-era tricks". It can coexist with standard mesh rendering no problem.

  • @OutrunCitizen
    @OutrunCitizen 9 months ago +5

    What we need is for someone to make a Gaussian Splatting modeling program.

    • @MaikoYT
      @MaikoYT 8 months ago

      Why model when you can simply create that object in real life and scan it in?

  • @MrEnkelmagnus
    @MrEnkelmagnus 9 months ago +7

    Finally someone explained all this cool new tech in a way I understand.

  • @carlosrivadulla8903
    @carlosrivadulla8903 9 months ago

    what a time to be empathic!

  • @DessertMonkey
    @DessertMonkey 9 months ago +2

    Last time I saw graphics like this, they said "these are grains of dirt".

  • @0rdyin
    @0rdyin 9 months ago +1

    This tech can be a great alternative to traditional rasterization for backgrounds in interactive story games like 'Her Story'..

  • @3govideo
    @3govideo 9 months ago

    I've been following since Luma AI got NeRF going, and it's amazing what we can get just by recording with our phones. Hope they can produce a lighter player soon. 🔥 Thanks for teaching the high-end terms 🚀

  • @Braindrain85
    @Braindrain85 9 months ago +2

    Point cloud approaches are definitely very cool. And this one is even prettier. Though they always come with a couple of downsides when it comes to lighting, animation, etc.

  • @gabe2o2
    @gabe2o2 9 months ago +1

    Pretty cool tech, but I do wonder how it would play with different shaders, fog effects, and lights we control in the scene. At least those are my immediate curiosities. Shaders because I'm a stylized boi, and shaders are how I accomplish this even when models are originally made in a more photorealistic manner. Otherwise, seeing how drastic changes in lighting and fog density mix with the tech would truly be an awesome little demo to see.

  • @Patapom3
    @Patapom3 9 months ago

    SH stands for Spherical Harmonics: it's the precomputed, view-dependent lighting/color information stored per splat.
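
For context, a rough sketch of how a renderer might turn those per-splat SH coefficients into a view-dependent color, using only degree-0/1 harmonics (the coefficient layout and the 0.5 offset are assumptions based on common implementations, not quoted from the paper's code):

```python
import numpy as np

# Real spherical-harmonics basis constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_to_rgb(sh_coeffs, view_dir):
    """Evaluate degree-0/1 spherical harmonics into an RGB color.

    sh_coeffs: (4, 3) array, one RGB triple per SH basis function
               (layout assumed here; real implementations vary).
    view_dir:  direction from the splat toward the camera.
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    color = SH_C0 * sh_coeffs[0]            # constant (view-independent) term
    color -= SH_C1 * y * sh_coeffs[1]       # degree-1 terms give the
    color += SH_C1 * z * sh_coeffs[2]       # view-dependent variation
    color -= SH_C1 * x * sh_coeffs[3]
    return np.clip(color + 0.5, 0.0, 1.0)   # offset used by some implementations

# Example: a splat that looks brighter when viewed from +Z.
coeffs = np.zeros((4, 3))
coeffs[0] = [1.0, 0.8, 0.6]
coeffs[2] = [0.3, 0.3, 0.3]
print(sh_to_rgb(coeffs, np.array([0.0, 0.0, 1.0])))
```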

  • @Reavenk
    @Reavenk 9 months ago

    I could definitely see this getting momentum for capture; way more practical than lightfields.
    But seems like an uphill battle for real-time uses. The lighting may be dynamic, but those dynamics are baked. And I'm guessing there is a lack of frustum culling, and all the particles need to be sorted to properly alpha blend?

  • @WifeWantsAWizard
    @WifeWantsAWizard 8 months ago

    It occurs to me that pairing splatting with traditional modeling in video games could be the wave of the future, if the splats are restricted to distant background objects and detailed foreground objects are replaced by LOD-correct alternatives as the player's avatar approaches.

  • @dvelasco
    @dvelasco 8 months ago

    Basically Maya's Paint Effects applied to a photogrammetry-derived point cloud. Ingenious!

  • @ScibbieGames
    @ScibbieGames 9 months ago +7

    I was thinking about working on a Godot implementation, but because instanced rendering (with MultiMesh) can't be culled on a per-mesh basis, I was uncertain whether it was achievable with decent performance. I'd like to hear from more experienced Godot developers, though, because I'm a noob.
    The reference rendering implementation is also fairly complicated and uses CUDA to efficiently order the splats for rendering, which doesn't translate well to the Godot renderer to begin with.

    • @vitordelima
      @vitordelima 9 months ago

      Some renderers use transparent quads aligned with the viewer (similar to Doom's enemies and items) to render this instead.
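
A tiny sketch of that billboard idea, assuming all you need per splat is the four corners of a camera-facing quad (the function and parameter names are made up for illustration):

```python
import numpy as np

def billboard_corners(center, size, cam_right, cam_up):
    """Corners of a screen-aligned quad for one splat.

    center:    (3,) splat position in world space
    size:      half-extent of the quad in world units
    cam_right, cam_up: the camera's right/up vectors in world space,
                       so every quad faces the viewer (Doom-sprite style).
    Returns the four corners in triangle-strip order.
    """
    r = cam_right * size
    u = cam_up * size
    return np.array([center - r - u,
                     center + r - u,
                     center - r + u,
                     center + r + u])

# Example: one splat in front of an axis-aligned camera.
print(billboard_corners(np.array([0.0, 0.0, 5.0]), 0.3,
                        np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 1.0, 0.0])))
```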

    • @SMorales851
      @SMorales851 9 months ago +1

      One could use compute shaders to order the splats, but I don't think Godot allows you to perform compute operations on the main RenderingDevice, and there's no way to share buffers between devices without transferring the data to the CPU and then to the other RD, which would be slow.

    • @Polygarden
      @Polygarden 9 months ago

      It's like a smart interpolation between different cameras. But that exact feature is also its disadvantage, as it's quite hard to remove the captured lighting from the source to create usable game assets. You have stretched splats which also contain the lighting (and in this case only the lighting as you captured it). Depending on the angle you look from, they are stretched differently and are thus able to shade your scene correctly, but those "lighting splats" belong to your scene in the very same way as solid objects. It is probably possible to make it work, but you will have a hard time removing the lighting from them (and that's needed if you want to combine multiple assets and/or different scans). It's amazing tech, but my guess is that it's rather useful for capturing true 3D photos for personal use cases.

    • @NeoShameMan
      @NeoShameMan 9 months ago +2

      It's an unstructured point cloud. Just bucket the splats into voxels, traverse the voxels outward from the camera in order, and then pass the ordered data to the renderer.
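
A minimal sketch of that ordering step, assuming the splats arrive as a flat array of centers (the bucket size and names are illustrative, and a real renderer would do this on the GPU):

```python
import numpy as np

def order_splats_back_to_front(centers, cam_pos, voxel_size=0.5):
    """Coarse back-to-front ordering of splats for alpha blending.

    centers:  (N, 3) splat center positions
    cam_pos:  (3,) camera position
    Buckets splats by quantized camera distance (a cheap stand-in for a
    full voxel traversal) and returns indices farthest-first.
    """
    dist = np.linalg.norm(centers - cam_pos, axis=1)
    bucket = (dist / voxel_size).astype(np.int64)    # coarse depth bucket
    # Stable sort by bucket, descending, so far splats are drawn first.
    return np.argsort(-bucket, kind="stable")

# Tiny usage example with random splats.
rng = np.random.default_rng(0)
centers = rng.uniform(-5.0, 5.0, size=(1000, 3))
order = order_splats_back_to_front(centers, cam_pos=np.array([0.0, 0.0, -10.0]))
print(order[:10])
```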

    • @NeoShameMan
      @NeoShameMan 9 months ago

      @@Polygarden We aren't talking about meshes; it's not representing surfaces but a light field. That's why a traversal using voxels makes sense as a light-field query. We aren't trying to reconstruct a volume.

  • @deluxe_1337
    @deluxe_1337 9 months ago

    This is great for filmmaking.

  • @seraaron
    @seraaron 9 months ago +4

    God, it looks like a dream when you get to the periphery of the scan.

  • @bcmpinc
    @bcmpinc 9 months ago

    I'm amazed at how it captures specular lighting. It's quite visible on the roof of the church.

  • @Arisilde
    @Arisilde 9 months ago

    This reminds me of how Media Molecule's "Dreams" works.

  • @BrianDamageYT
    @BrianDamageYT 9 months ago

    Kind of reminds me of the landscape rendering technique from the old Ecstatica games.

  • @studioopinions5870
    @studioopinions5870 8 months ago

    I think the best way to make use of 3D Gaussian splatting is to integrate it with AR glasses and combine animated characters into the scene. That way it will seem like a virtual Holodeck from Star Trek. Maybe the camera-tracking features of Blender or Unreal could make it possible to put moving characters in a story-like setting. Just my thoughts! Terry

  • @y1QAlurOh3lo756z
    @y1QAlurOh3lo756z 9 months ago +5

    Could this be used to "bake" extremely high-fidelity 3D scenes into point clouds and then splat them at runtime?

    • @altongames1787
      @altongames1787 9 months ago +1

      I understand what you mean, but why would you want something less performant?

    • @drdca8263
      @drdca8263 9 months ago +3

      @@altongames1787 Hm? The idea is that you could take some other rendering method that *can't be done in real time* and use it to create a Gaussian splatting scene which *can* be rendered in real time.
      (People have tried this. It seems to work pretty well!)

    • @NeoShameMan
      @NeoShameMan 9 months ago

      @@altongames1787 800 fps is quite performant in my opinion.

  • @constantinosschinas4503
    @constantinosschinas4503 9 months ago

    Gaussian splatting seems to be just image mapping on feathered particles. The texture of each blob changes according to the viewing angle, picking the original photo that best matches the angle, or a close angle that doesn't block the view of each splat. File size must be quite big compared to traditional static texturing.
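
A rough back-of-envelope for that file-size point, assuming the uncompressed per-splat layout commonly described for the reference implementation's .ply output (position, scale, rotation, opacity and 48 spherical-harmonic coefficients); treat the numbers as an estimate, not a measurement:

```python
# Per-splat floats: 3 position + 3 scale + 4 rotation + 1 opacity + 48 SH = 59
floats_per_splat = 3 + 3 + 4 + 1 + 48
bytes_per_splat = floats_per_splat * 4        # stored as 32-bit floats
splat_count = 3_000_000                       # roughly the scene size shown in the video
total_mb = splat_count * bytes_per_splat / 1e6
print(bytes_per_splat, "bytes/splat ->", round(total_mb), "MB uncompressed")
# ~236 bytes per splat -> roughly 700 MB, i.e. far larger than a comparable textured mesh.
```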

  • @dukemagus
    @dukemagus 9 months ago +4

    This will be crazy if you mix this tech with Google maps/earth data

  • @DanielNistrean
    @DanielNistrean 9 months ago

    The M1 Max comes with a 24-core or a 32-core GPU. Just ordered one for a mix of mobile/hobby game development. Continuing to watch the video...

  • @ardonnie
    @ardonnie 9 months ago

    Seems like it's just a matter of generating enough point clouds and pairing them up with descriptions before we can make generative models that create new scenes just from a description.

  • @ozanyasindogan
    @ozanyasindogan 9 months ago

    Looks amazing and practical. If they can use some NN on it to remove unnecessary particles and actually convert and split objects, that would be the endgame, I believe.

  • @dzft3w
    @dzft3w 9 months ago

    It works well on mobile too! interesting

  • @devilofether6185
    @devilofether6185 9 months ago

    Gaussian splatting reminds me of the rendering engine in dreams

  • @joloppo
    @joloppo 9 months ago

    The spiky bits of light make it look exactly like when scenes load in Assassin's Creed... which was basically loading into a sim in the context of the game. Crazy.

  • @shydun
    @shydun 8 months ago

    I feel like this would be good if you could separate each prop, somehow tie the dots to a dummy mesh, and then decorate the level.

  • @MrAuxiom
    @MrAuxiom 8 months ago

    Wow, that reminds me so much of the Virtues in Cyberpunk.

  • @paulwhiterabbit
    @paulwhiterabbit 9 months ago

    This could be a thing in the future, but it will only prosper for static 3D space viewing, since 3D polygons are much more practical for the dynamic real-time environments that games need. I just hope the tech to stream humongous file sizes faster than what we have comes sooner.

  • @ludologian
    @ludologian 9 months ago

    Thanks for sharing. I saw this weeks ago in another repo, sorry for not mentioning it.
    I want to implement a Unity volume point editor similar to the Nvidia workflow; it would also be great to implement a delighting tool and neural decompression algorithms (long-term project goal), inshallah.

  • @zahir3d
    @zahir3d 9 months ago

    Thanks for the video. Is it possible to export it at the end to a 3D format (FBX, OBJ, ...)?

  • @ScibbieGames
    @ScibbieGames 9 months ago +2

    12:25 To be fair, you can do with much less if you don't require as high a quality. Also, it's pretty likely a lot of speed can be traded for lower VRAM usage.
    To train to a reasonable 7,000 iterations you could probably get away with far less VRAM, and according to their own calculations it should be possible to train to reference-paper quality with just 8 GB of VRAM, but that hasn't been implemented.

    • @vitordelima
      @vitordelima 9 months ago +1

      And the spherical harmonics seem to be overkill for something that is mostly just doing specular reflections.

  • @eddiewalpole
    @eddiewalpole 9 months ago +1

    To answer the question posed in the thumbnail: unlikely

  • @jonvdveen
    @jonvdveen 8 months ago

    In a way, Gaussian splats are like very large atoms: they come together to make up everything in the scene.

  • @tombruckner2556
    @tombruckner2556 9 months ago

    Now I just need a Unity iOS plugin to create these models in real-time :)

  • @R1po
    @R1po 8 months ago

    Can't really imagine a use in games. But for VR chatrooms, real-estate offices, or VR sightseeing, sure.

  • @synapse349
    @synapse349 9 months ago

    i wonder if one could use nerf to build the point cloud to use for splatting...

  • @juanme555
    @juanme555 9 months ago

    It looks very interesting. It will be very interesting to see whether game developers choose to invest the time needed to properly fake photorealism through Gaussian splatting, achieving high fidelity at a very low processing-power cost, or whether they'll rather just use path-tracing methods, which will be a lot more convenient but also a lot heavier on the user's hardware.

  • @jimj2683
    @jimj2683 8 months ago

    This is the future of Google Street View in 3d!

  • @megasupernewbie
    @megasupernewbie 9 months ago

    just waiting for this to become standard monitor tech

  • @whtiequillBj
    @whtiequillBj 9 months ago

    This reminds me of the system that is used in Dreams by Media Molecule.
    I know it's PlayStation-specific, but maybe if there is enough push we can get Dreams onto PC eventually.

  • @MylezNevison
    @MylezNevison 9 months ago +1

    Could you say splats are kind of like multi-shaped three-dimensional pixels doing reverse virtual 3D pixel mappings (or 3D pixel projections) based on photographic data?

  • @diligencehumility6971
    @diligencehumility6971 9 months ago +2

    We are 100% gonna see something along these lines for future rendering in games

    • @BadBanana
      @BadBanana 9 months ago +2

      No, we're not.
      For movies, yes.
      For presentations, yes.
      Not for games.
      Rendering like this is the opposite of why we have render pipelines.
      It's unfeasible to ask a user to download hundreds of gigabytes of data per scene.
      If you want to create game objects from these captures, then I'm sure that can be done.
      But no,
      you won't see games made like this, ever.

    • @eddiewalpole
      @eddiewalpole 9 months ago

      I’d give it 1% tops for games specifically

  • @nocultist7050
    @nocultist7050 9 months ago

    I just want to use it on non-denoised ray-traced output frames, with a depth pass for spatial data. Just let me see if it works...

  • @BIFLI
    @BIFLI 9 months ago +1

    Has anyone done this with the Zapruder film yet?

  • @cygnos4612
    @cygnos4612 8 months ago

    I use the Luma AI plugin for unreal engine. Super easy to use and free.😊

  • @NunSuperior
    @NunSuperior 9 months ago +1

    Novalogic omg. That's a name I haven't heard in a long time.

  • @oleglinkov
    @oleglinkov 9 months ago +5

    Nah, without animation and adjustable lighting (day/night, dynamic lights) nobody is going to change their entire pipeline. Not in gamedev, anyway. Cool thing for virtual museums, home tours, etc.

    • @gamefromscratch
      @gamefromscratch 9 months ago +2

      I think there is enough data that lighting could be implemented. That said, to mix it in with a traditional rendering pipeline, you'd end up with two lighting paths and that wouldn't be ideal.

    • @euden_yt
      @euden_yt 9 months ago +1

      This came out like 3 months ago, and there's also an experiment showing that you CAN change the lighting. I've also seen someone implement animations in augmented reality, displayed on an iPhone. That's just 3 months of progress. Think of GPT-3 in 2020 vs GPT-3.5 in 2022 vs GPT-4 today.

    • @shlokbhakta2893
      @shlokbhakta2893 9 months ago

      @@euden_yt And just like that, 4D Gaussian splatting dropped lol

  • @RedstoneNinja99
    @RedstoneNinja99 9 months ago +2

    I wonder if you could map a scene even faster by just taping a high framerate 3d camera to a pole on your back

    • @NeoShameMan
      @NeoShameMan 9 months ago +1

      A 360 camera is enough

  • @Gigacat2137
    @Gigacat2137 9 months ago

    That reminds me of how the models work in Dreams.

  • @goodideas5659
    @goodideas5659 2 months ago

    A possibly similar but much simpler idea for use in gaming is to drop high-polygon models in favor of a basic box outline shape and just update the quads (2 triangles) on each face with angle-adjusted photos depending on the player's view angle. This has been done recently in the new Ultra Engine, but I would use a more perfected method so you don't see any clipping between photos and it's totally smooth. Maybe it's as simple as making a detailed sprite sheet and a shader to smoothly combine and move between images...?
    If a view direction falls between two captured image angles, have the shader create the correct image by blending the two closest ones... I think some smart people could achieve this.
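
A small sketch of that "blend the two closest photos" idea; everything here (the function name, the assumption that photos were captured at evenly spaced yaw angles) is illustrative rather than taken from any engine:

```python
import numpy as np

def blend_nearest_views(view_angle_deg, capture_angles_deg):
    """Pick the two captured photo angles bracketing the current view.

    capture_angles_deg is assumed sorted and covering 0..360 degrees.
    Returns (index_lo, index_hi, weight_hi) for a linear cross-fade.
    """
    angles = np.asarray(capture_angles_deg, dtype=float)
    a = view_angle_deg % 360.0
    hi = int(np.searchsorted(angles, a) % len(angles))  # first capture past the view (wraps)
    lo = (hi - 1) % len(angles)
    span = (angles[hi] - angles[lo]) % 360.0 or 360.0
    w = ((a - angles[lo]) % 360.0) / span               # 0 at lo, 1 at hi
    return lo, hi, w

# Example: photos every 45 degrees, camera looking from 100 degrees.
lo, hi, w = blend_nearest_views(100.0, np.arange(0, 360, 45))
print(lo, hi, round(w, 2))   # -> 2 3 0.22: mostly the 90-degree photo, a bit of the 135-degree one
```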

  • @VideaVice25
    @VideaVice25 9 months ago

    It's looking great, but it also means things I'd rather see die will survive and look even better in the future.
    Just imagine Metaverse + Nanite + Gaussian splats + VR... Hell awaits.

  • @HasanRx7
    @HasanRx7 9 months ago +3

    I wonder if this tech could be used as a reference template in 3D modeling programs to model real-world environments, instead of using 2D reference images. It would be extremely useful to model on top of, since it gives real-world scale and helps with prototyping the basic shapes and scale of the environment. I haven't been keeping up with 3D modeling tech lately, so I'm not sure if a similar solution already exists.

    • @steven11101010
      @steven11101010 9 months ago +1

      I think this is the more realistic use case - as a tool to improve current workflows.

    • @doomgb4994
      @doomgb4994 9 months ago +1

      I'm rooting for this kind of usage as well. I don't want the tech to deprive me of the fun of modeling.

    • @mitch9254
      @mitch9254 9 months ago +3

      Sim racing and golf games, for example, have been doing this for at least a decade: using laser-scanned point clouds as 3D reference to make a polygonal version of a real-world location.
      If this splat method ends up producing results at least as accurate as lidar, but substantially cheaper and faster, then surely studios and modders will be all in, but again, to use it as a reference.

  • @hanniffydinn6019
    @hanniffydinn6019 9 months ago +1

    Folks, the real use here is cinematographers using real-life backgrounds in an Unreal volume. As the film "The Creator" proves, real-life backgrounds are what really matter. This allows real-life backgrounds to be used in volume filmmaking! Real-time NeRF backgrounds are the future of volume filmmaking! 🤯🤯🤯🤯😎😎😎😎👍👍👍👍

  • @namelessalias0007
    @namelessalias0007 9 months ago

    Imagine combining this tech with what was used in the GTA 5 photorealism-enhancement demo that came out a couple of years ago...

  • @MonsterJuiced
    @MonsterJuiced 9 months ago +2

    It's slow because it uses the particle system to render; it's the same with the Unreal Engine one. Each splat is rendered using Niagara, and Niagara, even on the GPU, gets really slow when rendering point clouds or particle systems above 1 million points.

    • @vitordelima
      @vitordelima 9 months ago +1

      The original demo uses screen space 2D rendering.

    • @MonsterJuiced
      @MonsterJuiced 9 months ago

      @@vitordelima Original demo of what? Made by whom? If you watched the video the guy even confirms it's using the particle system to render the points and this scene is over 3million points. Why did you upvote yourself?

    • @JunglismVFX
      @JunglismVFX 9 months ago

      I wonder if you can cache them, though. I've had quite a few intense Niagara systems running in a level, but I cached them and they worked well; that was for video production. Not sure how it would go in game dev.

    • @vitordelima
      @vitordelima 9 months ago

      @@MonsterJuiced Of the technology being explained in the video. I didn't, you are just insane.

  • @stonekase
    @stonekase 9 months ago

    Finally

  • @MR3DDev
    @MR3DDev 9 months ago +1

    The reason this doesn't work for games (yet) is that you can't clean it up. Unless I am missing something, this isn't geometry, and you will have a very small space to work with.

  • @4.0.4
    @4.0.4 9 months ago +2

    One thing I'm not sure I understand is whether you could combine scenes, or how it handles reflections other than by creating a mirror-universe room. Like if you have two mirrors back to back.

    • @ScibbieGames
      @ScibbieGames 9 months ago +2

      It would show what was visible on the pictures it was 'trained' on.

    • @drdca8263
      @drdca8263 9 months ago

      @@ScibbieGames Makes sense, but it does make me wonder: what does it do if you record a scene where there's a mirror that isn't against a wall, and (in the training footage) have the camera go around the mirror? How will the quality of reflections in that case compare to the quality of reflections when the mirror is up against a wall and it can use the "treat the mirror like a portal" trick?

    • @NeoShameMan
      @NeoShameMan 9 months ago

      @@drdca8263 The Gaussians have directional color, so a back-facing mirror will probably duplicate the color of the background where they are not seen. But that's a nice observation. Remember these don't represent surfaces and volumes but light rays.

    • @drdca8263
      @drdca8263 9 months ago

      @@NeoShameMan Sorry, I don’t think I understand quite what you mean by “duplicate the color of the background where they are not seen”.
      ... also, in order to represent occlusion, don’t the splats kinda sorta also have to represent volumes? That’s what the opacity value handles, isn’t it?
      Edit: to be clear, I do anticipate it still working somewhat when it can’t make things such that you can “go into the mirror” on account of there being views in the training footage at the locations that “going into the mirror” would take you, and which don’t look like what the mirror world would look like there,
      I’m just expecting that the quality would probably be somewhat lower, and wondering by how much.

    • @NeoShameMan
      @NeoShameMan 9 months ago

      @@drdca8263 They use spherical harmonics, i.e. directional colors. These are used in games with light probes to illuminate a scene. They don't represent volume, they represent light rays from the source images; basically it's like they are fuzzy, blurry cubemaps, and the overlap of all of them reconstructs the view of a source image. It's like you took the 2D pixels of the source image, moved each one to where it's most probable, and merged the many pixels at the same place into a cubemap.

  • @prozacgodgamedev
    @prozacgodgamedev 9 months ago

    I think a really interesting real-world use case would be Google Maps. Google Maps is already kinda terrible up close... so they can't make it worse! haha. But seriously, they already have a number of photos, and it would probably work 10 times better for lots of scenes; it's "just a data processing issue"... ish ;)

  • @brodriguez11000
    @brodriguez11000 9 months ago

    Hu-po has a two and a half hour YT video on all the details.

  • @insertoyouroemail
    @insertoyouroemail 9 months ago

    It might be a useful asset bake target.

  • @zodchiy3d
    @zodchiy3d 9 months ago

    Has anyone tried running Unity in VR mode with this plugin? In theory it should work. I'll have to give it a try.

  • @impheris
    @impheris 9 months ago

    Thanks to Aras for his contribution...

  • @sunbleachedangel
    @sunbleachedangel 9 months ago

    You can make a cool psychedelic game with this

  • @claudiusraphael9423
    @claudiusraphael9423 9 months ago

    "It has a price-tag of a $137.43." -- "Yeah, we'll not be using that today ..."

  • @linuxrant
    @linuxrant 9 months ago

    If this were available in Godot, I would immediately implement it in my project; I have at least two really cool ideas for how to use this tech. I wonder if the splatting could be switched from Gaussian to other methods of splatting, for example... paintbrush splatting... ifkywim...

  • @astrahcat1212
    @astrahcat1212 9 months ago +4

    All we gotta do is kick down those polys and make it able to make stylized non-photo-realistic 3d models and we'll be good.

  • @leeoiou7295
    @leeoiou7295 9 months ago

    How does this handle collisions?

  • @jerkofalltrades
    @jerkofalltrades 9 months ago +1

    I was wondering: since these are just pictures used to create the point cloud data, why couldn't you set up a scene or model with many cameras (or just one that flies around the scene like a drone) and use that data to create the point cloud? Would it be economical? I don't know, I'm just curious if it's possible. Like, would there be a noticeable difference in data size between a Gaussian splat model and traditional polygons?
    Also, didn't the Sony game Dreams do something similar?

    • @kayobro1234
      @kayobro1234 9 months ago

      It's being done: ruclips.net/video/KriGDLvGDZI/видео.html

    • @jmalmsten
      @jmalmsten 9 months ago

      Not sure about the Sony Dreams. But I do have hazy memories of reading that the Blade Runner point-and-click adventure did something similar.

  • @Homiloko2
    @Homiloko2 9 months ago +4

    Is it even possible to apply lighting to this? I don't understand much about photogrammetry, but it seems pretty much immutable, e.g. you can't move objects, can't alter lighting and so on, which greatly reduces the applications for this.

    • @gamefromscratch
      @gamefromscratch 9 months ago +3

      Yes and no I believe.
      Yes, in that you have positional and color data and it's being rendered in real-time by the GPU. You could certainly implement virtual lighting (assuming you are much much much better at math than I am).
      No, in a traditional pipeline, like in Unity. This isn't rendered alongside the rest of the scene as I understand it, more in parallel. So if you added a light to the Unity scene, nothing would happen to the Gaussian splat you've imported. I do think you could make it work, but you'd essentially have two parallel lighting paths (I think).

    • @ScibbieGames
      @ScibbieGames 9 months ago +4

      The colors and reflections are stored inside "spherical harmonics"; they hold the color value when the splat is looked at from different angles.
      You could technically, probably, somehow bake the lighting from your scene into these harmonics, but that would still be immutable.
      Doing that in real time, for a few million points in space, might be a bit much; perhaps you'd need some sort of fragment shader, but for splats. lol

    • @vitordelima
      @vitordelima 9 months ago

      @@ScibbieGames Surfels, which are similar to this, were used for realtime global illumination over triangles in the past. Still it would require the use of lighting probes around clusters of splats or other simplification.

  • @developerdeveloper67
    @developerdeveloper67 9 months ago

    I can't see this being used in games for anything but hyper-casual games on PC (of which there are not many out there). Maybe in fighting games, due to having very few meshes being rendered? But even there, I think the performance is not there.

  • @MFKitten
    @MFKitten 9 months ago +1

    I can't wait for someone to figure out how to make interactive games with this stuff.

  • @WolfCatalyst
    @WolfCatalyst 9 months ago +2

    Bro, your computer chugs on everything. Most people can probably get 60 fps 😂
    UNF did a tutorial a while back called "make a full game in 2 hrs" (or something similar) and he used one of the free monthly cities. He was chugging too, but then he went into the properties of the UE editor, changed a couple of settings, and it was perfect. I don't remember what he changed, but there's definitely a fix.
    Looks like I found a new use for my drone though. Thanks!

    • @ScibbieGames
      @ScibbieGames 9 months ago +4

      It's an unoptimized, experimental renderer for a generally unsupported rendering pipeline which is implemented on top of Unity.

    • @atsignsarestupid
      @atsignsarestupid 9 months ago

      He probably changed "virtual shadow maps" to regular shadow maps; that's a one-click, triple-the-speed-of-Unreal-Engine button right there.

  • @JonoSSD
    @JonoSSD 9 months ago +15

    I've been reading about this for a while and it looks like real innovation. Unlike real-time ray tracing, something extremely taxing on the GPU that was pretty much forced down our throats by a company out of ideas for how to charge thousands of dollars for products that aren't worth half their asking price in real performance.

    • @poetryflynn3712
      @poetryflynn3712 9 months ago +10

      Ray tracing actually came from independent scientists in the 80s; we just needed consumer firepower to catch up.

    • @JonoSSD
      @JonoSSD 9 months ago

      @@poetryflynn3712 I know, I've even read a few of the scientific papers about it. It's really interesting stuff.
      The problem was Nvidia forcing the technology onto consumers well before it was ready, as this "crazy new thing that totally makes new GPUs worth double last gen", even though we're now three generations in and most of their lineup can't even handle it without upscaling (which was essentially invented because games could barely reach 60 fps on flagships when using ray tracing at the time. Remember the RTX 20 series?).
      I'd say it'll be some 4-5 more generations before real-time ray tracing becomes viable without upscaling.
      Nvidia (and AMD, which does very little besides copy its competitor) should focus on working more closely with developers to better optimize current games, so they don't suck up 32 gigs of RAM all the time and need 200 GB of storage to run.
      But no, that's not flashy enough to sell thousand-dollar giant pieces of inefficient heatsink.

    • @jmvr
      @jmvr 9 months ago

      @@poetryflynn3712 and Gaussian splatting is in a similar boat, except that the firepower has already been there for a while, just that it wasn't used for consumer applications until just now. Usually it was used for mapping out stuff like CT scans, and the original paper from 1993 ( web.cse.ohio-state.edu/~crawfis.3/Publications/Textured_Splats93.pdf ) used it to map out wind and clouds

    • @philbob9638
      @philbob9638 9 months ago +5

      Real time ray tracing is not something that was forced down your throat by a company out of ideas, it's a technology that has been promised and pursued for decades and will continue to be pursued for a while yet. What you have now is barely scratching the surface.

    • @vitordelima
      @vitordelima 9 months ago +2

      @@philbob9638 Then hardware-accelerated real-time ray tracing was forced down everyone's throats by a company out of ideas.

  • @bigali69190
    @bigali69190 9 months ago

    "Splat" is the future.

  • @skeleton_craftGaming
    @skeleton_craftGaming 9 months ago +1

    I want to see this in a program like Blender... because even if the real-time aspect isn't strictly necessary there, having the meshing not take literal hours would be nice.

  • @daysetx
    @daysetx 9 months ago

    Is this something like an Unlimited Detail engine?

  • @SnakeEngine
    @SnakeEngine 9 months ago +1

    I don't see it going anywhere for games, just like the "unlimited detail" stuff.

  • @mmmuck
    @mmmuck 9 months ago

    I just want a tool to convert these to a polygonal mesh and textures.

  • @VitorGuerreiroVideos
    @VitorGuerreiroVideos 9 months ago

    Apart from backplates/matte paintings/skyboxes, I don't really see much use for it in games. It's static: no collisions, no interaction, etc. I'm not sure, but it at least seems like a one-off, static, single-use type of thing?

    • @vitordelima
      @vitordelima 9 months ago +2

      It doesn't seem to be too hard to implement collisions because it's almost a collection of elliptical particles.

    • @steven11101010
      @steven11101010 9 months ago

      @@vitordelima Even easier: just rough out the outline with an invisible mesh and use that for collision detection. The main issue with interactions would be lighting and destruction of the model; those interactions would have a much harder time looking "right".

  • @Seacat17
    @Seacat17 9 months ago

    Great. But what about FPS future?