Gaussian Splatting Has Never Been Easier!


Comments •

  • @danieldimarchi7479 · 4 months ago +2

    Was struggling with this for a while, you helped majorly dude. Thanks!

  • @jmr2008jan · 3 months ago

    It would be pretty neat to have a reference library of these available online through a web 3d app.

  • @zerosaturn416 · 5 months ago +2

    Thank you so much for this tutorial. For months I have been trying to find a simple program to train Gaussian splats locally, but they never seemed to work because they were too advanced or I would get errors.

    • @levelupvfx · 5 months ago +1

      Of course! Happy to help, that’s exactly why I wanted to make this tutorial!

  • @carlossuarez9272 · 6 months ago +3

    Today I have seen a lot of content around this topic. It is a technology with great potential. I have tried KIRI Engine, Luma AI and Postshot, and by far the latter gives me better results in Unreal Engine. The model is better rendered. I suppose it is because locally I have more control. I did notice that the model lost quality when using it in Unreal Engine, but I didn't know why until I heard your explanation of the limitations of Niagara. At the moment I'm training a model of a castle based on a 360° aerial video that I found on YouTube for my game. Once I have the final result, I'll share it here. Thanks for all the breakdown.

  • @ArchitRege · 4 months ago +1

    Thanks a lot for the in-depth walkthrough

  • @TheWingEmpire · 6 months ago +1

    this is amazing man!! good job

  • @TheBadBone23 · 3 months ago +1

    Can you somehow use this as a 3D mesh? Something like replacing 3D scanning with this method... scan an object and 3D model something around it

  • @nbms950 · 3 months ago

    Hey thanks for the tutorial, really concise. Do you happen to know if you can then export the PLY out of Unreal as an FBX or other 3D mesh file?

    • @Densmode3dp · 2 months ago

      If you listen, he says he exported it in .ply format

  • @sdsfa8337 · 5 months ago +2

    Been using this program for a while and I love using it with UE5. Btw, do you know how to import splats into Blender with the color attribute? I don't see a color attribute export setting in Postshot :(

    • @levelupvfx · 5 months ago

      Sadly I pretty quickly gave up when it came to Gaussian splats in Blender, so I only tested it with Blender before I started using Postshot. I think the color data should be in the PLY file, but if not, I'm sure there's a way to get it out separately

    • @cedimogotes8662 · 4 months ago

      @@levelupvfx how do you get the color data into Blender?
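A likely reason Blender doesn't show a plain color attribute: Gaussian-splat PLY exports commonly use the 3DGS layout, where each splat's base color is stored as zeroth-order spherical-harmonics coefficients (`f_dc_0`..`f_dc_2`) instead of `red`/`green`/`blue` properties. This is a sketch under that assumption (ASCII PLY variant, one vertex per line), showing how those coefficients map back to plain RGB; it is not a Postshot or Blender API, just a standalone illustration:

```python
# Recover per-splat RGB from a Gaussian-splat PLY (ASCII variant).
# Assumption: the common 3DGS layout, where base color is stored as
# spherical-harmonics DC terms f_dc_0..f_dc_2, not red/green/blue.
SH_C0 = 0.28209479177387814  # zeroth-order SH basis constant

def dc_to_rgb(dc):
    """Map (f_dc_0, f_dc_1, f_dc_2) coefficients to clamped 0..1 RGB."""
    return tuple(min(max(0.5 + SH_C0 * c, 0.0), 1.0) for c in dc)

def read_splat_colors(lines):
    """Parse ASCII PLY lines and return one RGB tuple per splat."""
    it = iter(lines)
    props, count = [], 0
    for line in it:                      # header section
        tok = line.split()
        if tok[:2] == ["element", "vertex"]:
            count = int(tok[2])
        elif tok and tok[0] == "property":
            props.append(tok[2])
        elif tok and tok[0] == "end_header":
            break
    idx = [props.index(f"f_dc_{i}") for i in range(3)]
    out = []
    for _ in range(count):               # body: one vertex per line
        vals = [float(v) for v in next(it).split()]
        out.append(dc_to_rgb(vals[i] for i in idx))
    return out
```

Once you have plain RGB values, they could in principle be written onto the imported points as a Blender color attribute, but that step depends on the importer you use.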

  • @twistedionstudios · 8 days ago

    Heads up... as of 12.20.24, the Luma AI plugin will not install with UE5.5. "No compatible engines" error.

  • @RogueBeatsARG · 5 months ago

    Damn, the 944 is so good looking

  • @gaussiansplatsss · 6 months ago +5

    is there a limit on uploading photos in Postshot?

    • @levelupvfx · 6 months ago +5

      There is a suggested limit in their documentation of 100 to 300, but since everything is local, you're not actually uploading anything, so there's no limit to how many images you can use.
      For example, I've run splats using 1,500 images, and I've run ones using a few hundred. In general, more images will help, but there definitely is a sharp falloff past which adding more images doesn't add any more detail, it just slows the training down

  • @Dartheomus · 3 months ago

    This software is absolutely amazing, and I think it will only get better as AI progresses. I've found this software really doesn't like it when you miss an angle. You assume it's going to know how to render something like this car if you walk around it and then point down from the top; however, if you then try to look at the car from a low angle, the entire model breaks up. Also, and more frustrating, is the fact that there is a huge resolution hit. You can feed it really high-quality video, and what you get back looks like 1/10th the resolution, if that. I'm hoping that can be addressed soon. Finally, I really wish there were a streamlined way to rebuild these splats into 3D models. It would be really useful to couple this technology with 3D printing, but it's not very easy at the moment.

  • @Utsab_Giri · 5 months ago +1

    When you say that it runs locally, does that mean it doesn't need to be connected to the internet?
    Thanks!

    • @levelupvfx · 5 months ago +2

      Yes! Nothing you make is processed online; everything happens on your machine. I think you may need to be connected when you first start up because they need you to log in with your account, but after that you're good

  • @Strawberry_ZA · 4 months ago

    awesome porsche!

  • @deniaq1843 · 6 months ago

    Thumbs up! :)

  • @ElliottK · 6 months ago +1

    Still no spherical harmonics in LUMA AI :(

    • @levelupvfx · 6 months ago

      I know! I’m hoping they are able to find a way to get them working with Niagara, but it might be an engine limitation

  • @yvann.mp4 · 5 months ago

    thanks a lot

  • @anoopak4928 · 6 months ago

    that Mamukkoya Meme lol 😄

  • @PGANANDHAKRISHNAN · 3 months ago

    there, that's our own Mamukoya (translated from Malayalam)

  • @AndrewTSq · 28 days ago

    I will give it a pass because the installer wants admin rights. But it looks cool! Love the car, had an '82 a long time ago :)

  • @korujaa · 3 months ago +3

    there is NO application, just showing off

    • @3rdHalf1 · 28 days ago

      There is. I work at a metalworking company that designs and makes automatic wood-processing lines. After assembly, before shipping, we take hundreds of photos and videos that we use as reference for customer service, documentation and future projects. I have tried regular 3D laser scanning and photogrammetry, but these options don't really work well with large/complex installations. Laser is impractical because of the computation that is required, and the meshes generated from photogrammetry are ugly and the file sizes are ludicrous. Just recently I tried Gaussian splatting and it was just what we needed. In the future we are planning to upload Gaussian splats of our projects to a website, because unlike complex meshes, Gaussian splats can be viewed in a web browser and on smartphones.

  • @AlexTuduran · 5 months ago +1

    Of course they can cast shadows. It's just not coded yet.

    • @levelupvfx · 5 months ago +2

      Definitely let me know if you have a way to get shadows working! Currently the Luma AI plugin documentation claims "Shadows are not supported in Gaussian Splatting scenes". I figured it was a limitation of them using sprites in their Niagara system, which would make it rather difficult to make an accurate shadow, but if there's a simple coding fix or something that makes them possible, that would be awesome

    • @AlexTuduran · 5 months ago

      @@levelupvfx It's not a simple coding fix. You'd have to capture the depth buffer from the light's perspective; then, in the shader that renders the actual splats, you'd have to compute the fragment's position in light space, compare the depth buffer against the distance to the light, and decide whether the fragment is lit or not. And that's just the basic approach, but since the splats are puffy, additional shadow-filtering techniques would have to be employed in order to produce a smooth shadow. Or implement volumetric light scattering, where the splats could be interpreted as cloud density and also have self-shadowing. There are multiple ways, but it's definitely possible. I was kind of expecting that, since Unreal supports lit particles, it would kind of work automatically.
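The depth comparison described above, plus the filtering step that would soften the puffy splat edges, can be sketched as a toy illustration in pure Python (no real rendering API; the `depth_map` is assumed to come from a depth-only pass rendered from the light's point of view, and `pcf_shadow` stands in for the percentage-closer-filtering style averaging the comment mentions):

```python
def in_shadow(frag_light_depth, occluder_depth, bias=1e-3):
    """Classic shadow-map test: a fragment is shadowed when it lies
    farther from the light than the closest occluder recorded in the
    shadow map. The small bias avoids self-shadowing ("shadow acne")."""
    return frag_light_depth > occluder_depth + bias

def pcf_shadow(frag_light_depth, depth_map, x, y, bias=1e-3, radius=1):
    """Percentage-closer filtering: average the binary shadow test over
    a small neighborhood of shadow-map texels for a soft 0..1 factor."""
    taps = shadowed = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            taps += 1
            if in_shadow(frag_light_depth, depth_map[y + dy][x + dx], bias):
                shadowed += 1
    return shadowed / taps  # 0.0 = fully lit, 1.0 = fully in shadow
```

In a real implementation this logic would live in the splat shader, with `x, y` coming from projecting the fragment into the light's clip space; volumetric self-shadowing, as suggested above, would be a further step beyond this.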

  • @feel_in_fine · 2 months ago

    this doesn't seem photoreal tho

    • @levelupvfx · 2 months ago +2

      Agreed; while Gaussian splatting looks great in a proper renderer like Postshot, there are still tons of limitations to rendering splats in Unreal. Given this is a super new technology though, I have high hopes for constant improvements, to the point where Gaussian splats can look just as good in Unreal as they do elsewhere! (In all honesty I should have used my Supra Gaussian splat as the example here, as that one turned out much cleaner, but I wanted to try out a new one for this video. In general, as long as you have enough capture points, I still feel Gaussian splatting is second to none at the moment in terms of creating a photoreal result)

  • @Patheticbutharmless · 4 months ago +1

    To be honest, I don't see the benefit, for me, over photogrammetry. The wireframe is likely still a big mess. There is nothing much you can do with it. Professionally. Yet.
    Since the method cannot understand what kind of surfaces it is capturing, everything has this very bland, very uniform self-illuminated look.
    How do you give areas different types of roughness or, for example, metallic values etc.? It isn't possible.
    Separating parts of the mesh will look awful, with lots and lots of jagged edges, and smoothing these out will take about forever.
    Trying to force any kind of remeshing will distort everything beyond recognition, I imagine, unless the face count is 50 million upwards.
    At least for simulated environments, you can't really mix photogrammetry (or this) well with modeled 3D objects, because they will not "mesh" (pun by accident). It's either fully modeled or fully captured. (OK, I have to correct this: in a brightly lit outdoor environment they can look OK so far, because you don't have to de-light them. Personally I have always needed to retexture objects, with the captured diffuse as a starting-off point.)
    Without corrections it just doesn't hold up. It just always looks way out of place.
    There is so much more to an object than its mere shape and basic color value. We get a lot of information about something from the types of reflections and refractions off an object, which HAVE to be simulated via the information a model surface provides for the renderer.
    In a few years, when some AI knows what the object is after the capture process and understands what color area corresponds to what type of surface (basic: a painted car hood with rusted patches on it, or a worn-out leather jacket etc.), I will look at this again.