New Cattery Models! Inpaint Anything with LaMa - Nuke

  • Published: 4 Oct 2024

Comments • 24

  • @jtsanborn1324
    @jtsanborn1324 3 months ago +1

    I'm really, really happy with the stuff you show us here; this is the kind of content I and many others are looking for! With this new LaMa node, it would be interesting to compare it against NNCleanup, which is so powerful and has solved things for me that would otherwise take days or weeks, even on moving footage instead of a single frame.
    Thanks Alex, this is great!

    • @alexvillabon
      @alexvillabon  3 months ago +1

      Thanks for the kind words, I'm happy you're finding value in my videos so far!
      I had never heard of NNCleanup before. I downloaded the demo version and it seems very good in most instances, but because I don't have a license it adds very heavy grain/watermark over the images, which makes it hard to judge properly, let alone in motion. If I get a license or a proper trial I'll do a video comparing the results. Thanks for watching and for pointing me in the direction of this tool!

  • @RichardServello
    @RichardServello 2 months ago

    That LaMa result is very usable as a first pass. Much better than Photoshop's Generative Fill.

  • @Osvaldsson
    @Osvaldsson 3 months ago +1

    Great vid Alex! Pretty soon I’m not going to be able to keep all these names straight. LaMa, ABME, MiDaS, RAFT, TecoGAN…

    • @alexvillabon
      @alexvillabon  3 months ago +1

      Ha! Absolutely, and add to that the endless number of ComfyUI models and LoRAs… it's tough to keep track!

  • @samueljrgensen417
    @samueljrgensen417 3 months ago +3

    Another great tutorial, Señor Villabon!

  • @kietzi
    @kietzi 2 months ago

    I can see this being used in beauty retouches.

  • @VFXforfilm
    @VFXforfilm 2 months ago

    RTX 4070 Ti here; LaMa completely crashed my computer when I looked through the node with a mask added.

    • @alexvillabon
      @alexvillabon  2 months ago

      Oh man, that's unfortunate. It probably hit your GPU's memory limit, though it's strange that it crashed your whole computer. Maybe try again or send a report to The Foundry.

  • @wix001HD
    @wix001HD 3 months ago +1

    Compared to solutions like Fooocus, Generative Fill, etc., it looks relatively outdated and very rough, almost useless quality-wise for image inpainting (as expected, since this tool is built on 2021 research). Does it have any benefits for sequence inpainting, or haven't you tested that yet?

    • @alexvillabon
      @alexvillabon  3 months ago +1

      You are right, this is by no means the most advanced solution out there... not even close. The advantage here is that you don't have to leave Nuke, and it is one more tool/option in a comper's arsenal. I worked at one of the large studios for almost a decade in both film and commercials, and I know for a fact that most of the time you don't get access to Photoshop, let alone tools like ComfyUI / Stable Diffusion. As for temporal consistency, The Foundry's website states: "LaMa is not temporally consistent, in the example video smart vectors were used to propagate the in-painted area."
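      A toy illustration of that caveat: because each frame is inpainted independently, the fill is recomputed per frame and can flicker across a sequence. The Python sketch below uses a simple mean-fill function as a stand-in for LaMa's network; `inpaint_frame` and all values are illustrative assumptions, not The Foundry's implementation.

```python
import numpy as np

def inpaint_frame(frame, mask):
    """Toy stand-in for LaMa: fill masked pixels with the mean of the
    unmasked pixels in the same frame. Real LaMa uses a neural network;
    this only illustrates the per-frame (non-temporal) workflow."""
    out = frame.astype(np.float64).copy()
    out[mask] = out[~mask].mean()
    return out

# A tiny two-frame "sequence" with noise. Because each frame is
# inpainted independently, the filled value differs frame to frame,
# which is the flicker that smart vectors are used to work around.
rng = np.random.default_rng(0)
frames = [rng.normal(0.5, 0.1, (8, 8)) for _ in range(2)]
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True

filled = [inpaint_frame(f, mask) for f in frames]
# The two fills disagree, i.e. the result is temporally inconsistent:
print(abs(filled[0][3, 3] - filled[1][3, 3]) > 0)  # True
```

      Propagating one inpainted frame with smart vectors, as The Foundry suggests, sidesteps this by computing the fill once and warping it through the sequence.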

  • @RichardServello
    @RichardServello 2 months ago

    For something like the runner, it would work better if it could work temporally.

    • @alexvillabon
      @alexvillabon  2 months ago

      That's the natural evolution. So far no model does that, but I'm sure it's a matter of time.

  • @glennteel1461
    @glennteel1461 2 months ago

    Awesome video Alex!

  • @NickPittas
    @NickPittas 3 months ago

    Have you tried using the inpaint first and then adding the LaMa? In ComfyUI we use it for inpainting that way. Maybe it makes a cleaner plate.

    • @alexvillabon
      @alexvillabon  3 months ago +1

      Interesting thought. Unfortunately, I just tested it and it gives the exact same result.

    • @NickPittas
      @NickPittas 3 months ago

      @@alexvillabon Yeah, I also tested it as soon as I wrote the comment, and it seems not to respect the contents of the mask, only the surrounding areas. It could still be useful to export the mask and video to ComfyUI and test the inpainting there with crop and stitch nodes; maybe you'd get better quality for large-scale matte paintings and cleanups. And maybe AnimateDiff could even help with the temporal consistency.

  • @iamimpress
    @iamimpress 3 months ago +1

    Is an Apple Silicon processor needed as well? I see it mentioned on the download page but it's not mentioned here in the video. Thanks!

    • @iamimpress
      @iamimpress 3 months ago

      Nevermind - answered my own question - CRASH! haha

    • @alexvillabon
      @alexvillabon  2 months ago

      :(

  • @src1903
    @src1903 3 months ago

    I was really wondering about this AI model. It seems not very practical; maybe I will use it on some easy shots. Thanks for the video.

  • @ninapower5141
    @ninapower5141 19 days ago +1

    This is not a tutorial. You didn't explain anything about your workflow. You're just pandering to an audience of Nuke experts who don't need this video anyway.