Simple Outpainting With ComfyUI

  • Published: 23 Sep 2024

Comments • 35

  • @pookexvi4998 · 7 months ago +8

    I've been trying to add outpainting to my workflow for a while. I followed your video exactly, even matching the values, but I'm still getting what is basically a solid-color border around where the padding is.

    • @luukknipper0 · 5 months ago

      Increasing the grow_mask_by value in the VAE Encode (For Inpainting) node fixed it for me, at least.

    • @roachey · 3 months ago

      Idk what happened; this used to work for me, but re-following the video I get the same result you described.

    • @Latent_Diffusion · 3 months ago

      Try setting the denoise to 1. Anything less than that is likely to give you that blank extension. (See the sketch after this thread.)

    • @MarvelSanya · 20 days ago

      Any solution?

    • @darkie7379 · 9 days ago

      @Latent_Diffusion Damn, thank you man!
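
For readers hitting the same solid-border issue, the two fixes from this thread correspond to two concrete node settings. Below is a minimal sketch of the relevant fragment of a ComfyUI API-format workflow; the node IDs and wiring are hypothetical placeholders, not values from the video:

```python
# Fragment of a ComfyUI API-format workflow (the JSON posted to /prompt).
# Node IDs "4", "10"-"12" and the connections are hypothetical; only the
# two flagged parameters matter here.
workflow_fragment = {
    "11": {
        "class_type": "VAEEncodeForInpaint",   # "VAE Encode (for Inpainting)"
        "inputs": {
            "pixels": ["10", 0],  # padded image from "Pad Image for Outpainting"
            "mask": ["10", 1],    # mask over the new padding, same node
            "vae": ["4", 2],      # VAE output of the checkpoint loader
            "grow_mask_by": 16,   # fix #1: raise from the default (6) to blend the seam
        },
    },
    "12": {
        "class_type": "KSampler",
        "inputs": {
            "denoise": 1.0,       # fix #2: anything below 1.0 tends to leave the
                                  # padded area as a flat, blank extension
            "latent_image": ["11", 0],
            # seed, steps, cfg, sampler_name, scheduler, model, positive,
            # and negative are wired as in the rest of the workflow
        },
    },
}
```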

  • @HistoryIsAbsurd · 7 months ago +1

    My guy is on fire with the videos! Thanks again!

    • @PromptingPixels · 7 months ago

      So much to cover - never enough time! Thanks so much for sticking around and watching!

  • @vvip2u · 6 months ago +1

    Good job! It helped me run inference on other images via a new workflow.

  • @iozsoo · 7 months ago +1

    Love your videos, thank you very much, keep up the good work!

    • @PromptingPixels · 7 months ago

      Thanks so much man - appreciate the kind words!

  • @ramywagdy6848 · 7 months ago +2

    What is the best model for outpainting?

  • @Parsitube_yt · 7 months ago

    Awesome man.

  • @tarbyk82 · 7 months ago +1

    Awesome channel! How is the experience with macOS and open-source AI? Even though I have the most standard setup (Windows, Intel CPU, Nvidia GPU), I still sometimes run into compatibility and general IT issues when I need to install plugins, extensions, or additional AI software, train models, and so on. Does macOS suffer from these issues because it might be neglected by the devs, or is it smooth sailing? :D
    And which Apple device exactly are you using? I was thinking of using a Mac Studio at work because of the insane amount of shared memory.

    • @PromptingPixels · 7 months ago +1

      Hey, thank you so much for the kind words! Honestly, I think the Mac is worlds behind a dedicated GPU for stable diffusion models. I started this channel on an M1 MBP with only 16GB of RAM and quickly hit productivity issues when it came to testing.
      For 512x512 images and a little patience, it is definitely fine. But SDXL resulted in memory errors, batch processing ran hot and slow, and any sort of AnimateDiff testing also became too long of a process.
      That made me look for a creative solution that wasn't a cloud-based service. Not that I don't think they have their place (they absolutely do), but I just don't like watching my money visibly drain while I am just researching, testing, documenting, and sharing my results with everyone here.
      That led me to get an old PC and put in an RTX 3060. I access it remotely over the LAN, which gives me much better performance than I could ever get with the MBP while still enjoying the Mac apps that complement my workflow. I just start ComfyUI/WebUI with the --listen flag, as it gives me a port to connect to remotely (see the sketch after this thread).
      So I say all that because I am not sure the Mac is the best platform at this time when it comes to SD models. My understanding is that even with Metal, the memory bandwidth isn't nearly as quick as PCIe. However, text models like Mixtral 8x7B may offer better performance per dollar than a GPU (assuming enough memory is available).
      I like to share this spreadsheet with others, as I think it's the most telling (docs.getgrist.com/3mjouqRSdkBY/sdperformance): it's user-reported speeds of stable diffusion across various platforms, GPUs, etc.

    • @tarbyk82 · 7 months ago

      Thanks for the valuable info! @PromptingPixels

    • @ghost-user559 · 2 months ago

      Macs are by far the best where they excel, and they are absolutely left behind in many areas. If you choose specific apps that cater to the Neural Engine, which generally requires converting the models to a different format, they can actually run circles around a PC for Stable Diffusion image generation. Because the Neural Engine acts like a separate set of cores, you get your RAM plus the 16 cores set aside for machine learning applications, which means a 16GB machine acts like 32GB for those models. And the VRAM on the new chips' unified architecture counts the total RAM because it's shared, so a 16GB Apple Silicon machine has 16GB of VRAM, minus overhead for running applications. Where it falls behind is anything CUDA-specific.
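
For anyone wanting to replicate the LAN setup described in this thread: once ComfyUI is started on the PC with the --listen flag, any machine on the network can reach its HTTP API. A minimal sketch, assuming a hypothetical LAN address and ComfyUI's default port of 8188:

```python
# Minimal reachability check for a remote ComfyUI instance started with:
#   python main.py --listen 0.0.0.0
# The host address below is a hypothetical example; use your PC's LAN IP.
import json
import urllib.request

HOST = "http://192.168.1.50:8188"  # hypothetical LAN address, default port

# /system_stats is a built-in read-only endpoint; a JSON reply means the
# server is reachable from this machine.
with urllib.request.urlopen(f"{HOST}/system_stats") as resp:
    print(json.dumps(json.load(resp), indent=2))

# Queuing work remotely is a POST of an API-format workflow to /prompt:
#   req = urllib.request.Request(
#       f"{HOST}/prompt",
#       data=json.dumps({"prompt": workflow}).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   urllib.request.urlopen(req)
```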

  • @FlowFidelity · 1 month ago

    Helpful! What is your preferred inpainting model these days?

    • @PromptingPixels · 1 month ago

      Really loving soft inpainting, as the results are more seamless. The checkpoint I use depends on the image being retouched (photographs use a realistic checkpoint, illustrations typically use a general or anime-based checkpoint, etc.).

  • @goshniiAI · 7 months ago +1

    Is it possible to get a link to the inpainting checkpoint model used (DreamShaper Inpainting 8)? Thank you for the demonstration.

    • @PromptingPixels · 7 months ago +1

      Sure thing - here ya go: civitai.com/models/4384?modelVersionId=131004

    • @goshniiAI · 7 months ago

      Thank you lots, @PromptingPixels!

  • @AIAnimationStudio · 7 months ago +1

    Great video.

  • @Sgummol · 7 months ago

    I did the same as you, but the VAE Encode (for Inpainting) node gives me the error "The parameter is incorrect".

  • @erdmanai · 7 months ago +1

    Thank you, man!
    Outpainting works better in Automatic1111 (fewer artifacts and less of a border line), doesn't it?

    • @PromptingPixels · 7 months ago

      Generally, yes. I think Auto1111 is better for modifying images than Comfy; at some point I'll do a proper comparison here on the channel. But I made this video for folks who would rather keep everything contained in one interface.

  • @romarozhkov · 5 months ago

    Thank you bro!

  • @XingjiLu · 6 months ago

    Where can I get that inpainting model?

    • @PromptingPixels · 6 months ago

      civitai.com/models/4384?modelVersionId=131004

  • @gjsxnobody7534 · 7 months ago +1

    Sorry, but the title says outpainting, and you showed an inpainting example, no?

    • @PromptingPixels · 7 months ago

      The example in the video uses the 'Pad Image for Outpainting' node. I found that the inpainting checkpoint (which is trained on partial images) performed better than the generation checkpoint. It's always best to test what gives the best results for your use case. (See the sketch below.)
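
To make the outpainting-vs-inpainting distinction concrete, here is a skeleton of the node chain described above, in ComfyUI API format. Node IDs, padding amounts, and the filename are illustrative, not values taken from the video:

```python
# Skeleton of an outpainting graph. "Pad Image for Outpainting"
# (class ImagePadForOutpaint) enlarges the canvas and emits a mask over the
# new padding; "VAE Encode (for Inpainting)" turns both into a latent that
# an inpainting checkpoint then fills. IDs and values are illustrative.
outpaint_graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "ImagePadForOutpaint",
          "inputs": {"image": ["1", 0],
                     "left": 0, "top": 0, "right": 256, "bottom": 0,
                     "feathering": 40}},  # softens the transition at the seam
    "3": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["2", 0], "mask": ["2", 1],
                     "vae": ["4", 2], "grow_mask_by": 16}},
    # Node "4" would be a CheckpointLoaderSimple pointing at an inpainting
    # checkpoint (e.g. DreamShaper 8 Inpainting), followed by the usual
    # CLIP text encodes, a KSampler with denoise 1.0, VAEDecode, and SaveImage.
}
```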

  • @mssuxmyass · 7 months ago +1

    Thank You!