Gaussian Splatting Has Never Been Easier!

  • Published: Oct 6, 2024

Comments • 34

  • @danieldimarchi7479 • 1 month ago +1

    Struggling with this for a while, you helped majorly, dude. Thanks!

  • @carlossuarez9272 • 3 months ago +2

    Today I have seen a lot of content around this topic. It is a technology with great potential. I have tried KIRI Engine, Luma AI, and Postshot, and by far the latter gives me better results in Unreal Engine. The model is better rendered; I suppose it is because I have more control locally. I did notice that the model lost quality when using it in Unreal Engine, but I didn't know why until I heard your explanation of the limitations of Niagara. At the moment I'm training a model of a castle based on a 360° aerial video that I found on YouTube for my game. Once I have the final result, I'll share it here. Thanks for the full breakdown.

  • @jmr2008jan • 1 month ago

    It would be pretty neat to have a reference library of these available online through a web 3D app.

  • @ArchitRege • 1 month ago +1

    Thanks a lot for the in-depth walkthrough

  • @TheWingEmpire • 3 months ago +1

    this is amazing man!! good job

  • @Dartheomus • 1 month ago

    This software is absolutely amazing, and I think it will only get better as AI progresses. I've found it really doesn't like it when you miss an angle. You might assume it's going to know how to render something like this car if you walk around it and then point down from the top, but if you then try to look at the car from a low angle, the entire model breaks up. Even more frustrating is the huge resolution hit: you can feed it really high-quality video, and what you get back looks like a tenth of the resolution, if that. I'm hoping that can be addressed soon. Finally, I really wish there were a streamlined way to rebuild these splats into 3D models. It would be really useful to couple this technology with 3D printing, but it's not very easy at the moment (one possible route is sketched below).
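    A rough sketch of that splat-to-printable-mesh route, assuming the splat is exported as a standard 3DGS-style PLY: treat the splat centers as a point cloud and run Poisson surface reconstruction with Open3D. This is just one possible approach, not a built-in Postshot feature, and the file paths are placeholders:

    ```python
    # Hypothetical sketch: turn a Gaussian-splat PLY into a watertight mesh
    # for 3D printing via Open3D's Poisson surface reconstruction.
    # Assumes the PLY stores splat centers as ordinary x/y/z vertex data,
    # as the reference 3DGS export does; extra splat attributes are ignored.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scene.ply")  # splat centers only

    # Poisson reconstruction needs oriented normals; estimate them from
    # each point's local neighborhood.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
    )

    # Higher depth = more detail (and more memory); 9 is a common start.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9
    )

    o3d.io.write_triangle_mesh("scene_mesh.obj", mesh)
    ```

    Expect cleanup afterwards: splat centers are noisier than a photogrammetry point cloud, so the surface will need smoothing before printing.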

  • @zerosaturn416 • 3 months ago +1

    Thank you so much for this tutorial. For months I have been trying to find a simple program to train Gaussian splats locally, but none of them ever seemed to work because they were too advanced or I would get errors.

    • @levelupvfx • 3 months ago

      Of course! Happy to help, that’s exactly why I wanted to make this tutorial!

  • @RogueBeatsARG • 2 months ago

    Damn, the 944 is so good looking

  • @TheBadBone23 • 28 days ago +1

    Can you somehow use this as a 3D mesh? Something like replacing 3D scanning with this method... scan an object and 3D-model something around it

  • @nbms950 • 26 days ago

    Hey, thanks for the tutorial, really concise. Do you happen to know if you can then export the PLY out of Unreal as an FBX or other 3D mesh file?

    • @Densmode3dp • 4 days ago

      If you listen, he says he exported in .ply format

  • @Strawberry_ZA • 1 month ago

    Awesome Porsche!

  • @yvann.mp4 • 2 months ago

    thanks a lot

  • @gaussiansplatsss • 3 months ago +3

    Is there a limit on uploading photos in Postshot?

    • @levelupvfx • 3 months ago +3

      There is a suggested limit in their documentation of 100 to 300, but since everything is local, you're not actually uploading anything, so there's no limit to how many images you can use.
      For example, I've run splats using 1500 images, and I've run ones using a few hundred. In general, more images will help, but there's definitely a sharp falloff beyond which adding more images doesn't add any more detail; it just slows the training down.

  • @anoopak4928 • 3 months ago

    that Mamukkoya Meme lol 😄

  • @deniaq1843 • 3 months ago

    Thumbs up! :)

  • @korujaa • 1 month ago +1

    there is NO application, just showing off

  • @sdsfa8337 • 2 months ago

    Been using this program for a while and I love using it with UE5. BTW, do you know how to import splats into Blender with the color attribute? I don't see a color attribute export setting in Postshot :(

    • @levelupvfx • 2 months ago

      Sadly, I pretty quickly gave up when it came to Gaussian splats in Blender, so I only tested it with Blender before I started using Postshot. I think the color data should be in the PLY file, but if not, I'm sure there's a way to get it out separately.
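      If the export follows the reference 3DGS layout, the base color is stored as spherical-harmonic DC coefficients (f_dc_0..2) rather than a plain RGB attribute, which may be why Blender importers miss it. A minimal sketch for pulling it out with the plyfile package, assuming that standard layout:

      ```python
      # Hypothetical sketch: extract per-splat base color from a 3DGS-style
      # PLY. Assumes the standard INRIA export layout, where color lives in
      # the spherical-harmonic DC coefficients f_dc_0..2, not in plain RGB.
      import numpy as np
      from plyfile import PlyData

      SH_C0 = 0.28209479177387814  # degree-0 SH basis constant

      v = PlyData.read("splat.ply")["vertex"]

      # Convert the DC coefficients to RGB in [0, 1].
      dc = np.stack([v["f_dc_0"], v["f_dc_1"], v["f_dc_2"]], axis=-1)
      rgb = np.clip(0.5 + SH_C0 * dc, 0.0, 1.0)

      xyz = np.stack([v["x"], v["y"], v["z"]], axis=-1)
      print(xyz.shape, rgb.shape)  # (N, 3) (N, 3)
      ```

      From there the centers and colors can be written back out as an ordinary colored point cloud that Blender will read.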

    • @cedimogotes8662 • 1 month ago

      @levelupvfx how do you get the color data into Blender?

  • @Utsab_Giri • 3 months ago

    When you say that it runs locally, does that mean it doesn't need to be connected to the internet?
    Thanks!

    • @levelupvfx • 3 months ago +1

      Yes! Nothing you make is processed online; everything happens on your machine. I think you may need to be connected when you first start up, because they need you to log in with your account, but after that you're good.

  • @PGANANDHAKRISHNAN • 28 days ago

    Hey, that's our Mamukkoya

  • @ElliottK • 3 months ago

    Still no spherical harmonics in LUMA AI :(

    • @levelupvfx • 3 months ago

      I know! I’m hoping they are able to find a way to get them working with Niagara, but it might be an engine limitation
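      For context, spherical harmonics are what give splats their view-dependent color, so the engine has to evaluate something like the sketch below per splat per frame. This uses the basis constants from the reference 3DGS implementation, degree 1 only (real scenes go up to degree 3); it is illustrative, not Niagara code:

      ```python
      # Illustrative sketch of view-dependent color from spherical
      # harmonics, degree 1 only; not engine code.
      import numpy as np

      SH_C0 = 0.28209479177387814
      SH_C1 = 0.4886025119029199

      def sh_to_color(sh: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
          """sh: (4, 3) RGB coefficients; view_dir: unit camera-to-splat vector."""
          x, y, z = view_dir
          color = (
              SH_C0 * sh[0]
              - SH_C1 * y * sh[1]
              + SH_C1 * z * sh[2]
              - SH_C1 * x * sh[3]
          )
          return np.clip(color + 0.5, 0.0, 1.0)

      # The same splat seen from two directions yields two different colors.
      sh = np.random.rand(4, 3) * 0.2
      print(sh_to_color(sh, np.array([0.0, 0.0, 1.0])))
      print(sh_to_color(sh, np.array([1.0, 0.0, 0.0])))
      ```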

  • @AlexTuduran • 2 months ago +1

    Of course they can cast shadows. It's just not coded yet.

    • @levelupvfx • 2 months ago +2

      Definitely let me know if you have a way to get shadows working! Currently the Luma AI plugin documentation claims “Shadows are not supported in Gaussian Splatting scenes.” I figured it was a limitation of them using sprites in their Niagara system, which would make it rather difficult to produce an accurate shadow. But if there's a simple coding fix or something that makes it possible, that would be awesome.

    • @AlexTuduran • 2 months ago

      @levelupvfx It's not a simple coding fix. You'd have to capture the depth buffer from the light's perspective; then, in the shader that renders the actual splats, you'd have to compute the fragment's position in light space, compare the depth buffer value against the distance to the light, and decide whether the fragment is lit or not. And that's just the basic approach: since the splats are puffy, additional shadow-filtering techniques would have to be employed to produce a smooth shadow. Or implement volumetric light scattering, where the splats could be interpreted as cloud density and also get self-shadowing. There are multiple ways, but it's definitely possible. I was kind of expecting that, since Unreal supports lit particles, it would work automatically.
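      A toy version of the depth-comparison step described above, in plain NumPy rather than shader code; all names and inputs here are illustrative:

      ```python
      # Toy sketch of a shadow-map test: render a depth buffer from the
      # light, then check each fragment against it in light space.
      import numpy as np

      def is_lit(point_world: np.ndarray,
                 light_view_proj: np.ndarray,
                 shadow_map: np.ndarray,
                 bias: float = 1e-3) -> bool:
          # Transform the world-space point into the light's clip space.
          p = light_view_proj @ np.append(point_world, 1.0)
          p = p[:3] / p[3]  # perspective divide -> normalized device coords

          # Map NDC [-1, 1] to shadow-map texel coordinates.
          h, w = shadow_map.shape
          u = int((p[0] * 0.5 + 0.5) * (w - 1))
          v = int((p[1] * 0.5 + 0.5) * (h - 1))
          if not (0 <= u < w and 0 <= v < h):
              return True  # outside the shadow map: assume lit

          # Lit if not behind the closest surface the light sees.
          return p[2] <= shadow_map[v, u] + bias

      # For soft shadows on puffy splats you'd average several offset taps
      # (percentage-closer filtering) instead of a single comparison.
      ```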

  • @Patheticbutharmless • 1 month ago +1

    To be honest, I don't see the benefit over photogrammetry, for me. The wireframe is likely still a big mess; there is nothing much you can do with it professionally. Yet.
    Since the method cannot understand what kind of surfaces it is capturing, everything has this very bland, very uniform, self-illuminated look.
    How do you give areas different types of roughness or, for example, metallic values, etc.? It isn't possible.
    Separating parts of the mesh will look awful, with lots and lots of jagged edges, and smoothing these out will take forever.
    Trying to force any kind of remeshing will distort everything beyond recognition, I imagine, unless the face count is 50 million upwards.
    At least for simulated environments, you can't really mix photogrammetry (or this) well with modeled 3D objects, because they will not "mesh" (pun by accident). It's either fully modeled or fully captured. (OK, I have to correct this: in a brightly lit outdoor environment they can look OK, mostly because you don't have to de-light them. Personally, I have always needed to retexture objects, with the captured diffuse as a starting-off point.)
    Without corrections it just doesn't hold up; it always looks way out of place.
    There is so much more to an object than its mere shape and basic color value. We get a lot of information about something from the types of reflections and refractions off an object, which HAVE to be simulated via the information a model's surface provides to the renderer.
    In a few years, when some AI can recognize what the object is after the capture process and understand which color area corresponds to which type of surface (say, a painted car hood with rusted patches on it, or a worn-out leather jacket), I will look at this again.