I've been trying to add outpainting to my workflow for a while. I followed your video exactly, even matching values, but I'm still getting basically a solid color border around where the padding is.
Increase the grow_mask_by value in the VAE Encode (For Inpainting) node; that fixed it for me at least.
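For anyone wondering what that setting actually does: grow_mask_by dilates the inpaint mask outward by a number of pixels before encoding, so the sampler blends some of the original image into the new region instead of leaving a hard seam. A rough NumPy sketch of that dilation (my own illustration, not ComfyUI's actual code):

```python
import numpy as np

def grow_mask(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Dilate a binary 2D mask outward by `pixels`, one 4-neighbour ring per step."""
    grown = mask.astype(bool)
    for _ in range(pixels):
        # Pad with zeros so edge pixels can be compared against a border.
        padded = np.pad(grown, 1, mode="constant")
        # A pixel is set if it, or any of its 4 neighbours, was set.
        grown = (
            padded[1:-1, 1:-1]
            | padded[:-2, 1:-1]   # neighbour above
            | padded[2:, 1:-1]    # neighbour below
            | padded[1:-1, :-2]   # neighbour left
            | padded[1:-1, 2:]    # neighbour right
        )
    return grown.astype(mask.dtype)
```

A bigger value means a wider overlap band between the original pixels and the freshly generated ones, which is why it helps hide that solid border.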
Idk what happened; this used to work for me, but re-following the video I get the same result as you described.
Try setting the denoise to 1. Anything less than that is likely to give you that blank extension.
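A loose intuition for why this matters (a simplification, not the actual scheduler math): at denoise 1.0 the padded latent is replaced entirely by noise, so the model is free to generate new content there, while lower values keep a fraction of the flat gray padding, which survives as that blank border.

```python
import numpy as np

def partial_noise(latent: np.ndarray, denoise: float, rng) -> np.ndarray:
    """Simplified blend: denoise=1.0 discards the latent entirely;
    denoise<1.0 keeps part of it, so empty padding stays empty-ish."""
    noise = rng.standard_normal(latent.shape)
    return denoise * noise + (1.0 - denoise) * latent
```

So with denoise at, say, 0.5, half of that featureless padding is still baked into what the sampler starts from.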
any solution?
@@Latent_Diffusion damn, thank you man
My guy is on fire with the videos! Thanks again!
So much to cover - never enough time! Thanks so much for sticking around and watching!
Good job! It helped me run inference on other images with a new workflow.
Love your videos, thank you very much, keep up the good work!
Thanks so much man - appreciate the kind words!
What is the best model for outpainting?
Awesome man.
Thanks again!
Awesome channel! How is the experience with macOS and open-source AI? Even though I have the most standard setup (Windows, Intel CPU, Nvidia GPU), I still sometimes run into compatibility and general IT issues when I need to install plugins, extensions, or additional AI software, train models, and so on. Does macOS suffer from these issues because it might be neglected by the devs, or is it smooth sailing? :D
And which Apple device exactly are you using? I was thinking of using a Mac Studio at work because of the insane amount of shared memory.
Hey - thank you so much for the kind words! Honestly, I think the Mac is worlds behind a dedicated GPU for Stable Diffusion models. I started this channel on an M1 MacBook Pro with only 16GB of RAM and quickly hit productivity issues when it came to testing.
For 512x512 images and a little patience, it is definitely fine. But SDXL resulted in memory errors, batch processing ran hot and slow, and any sort of AnimateDiff testing also became too long of a process.
That made me try to find a creative solution that wasn't a cloud-based service. Not that I don't think they have their place (they absolutely do), but I just don't like watching my money drain while I'm just researching, testing, documenting, and sharing my results with everyone here.
That led me to get an old PC and put an RTX 3060 in it. I access it remotely over the LAN, which gives me much better performance than I could ever get with the MacBook while still enjoying the Mac apps that complement my workflow. I just start ComfyUI/WebUI with the --listen flag, which exposes a port I can connect to remotely.
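For reference, both UIs expose the server over the network with that flag (the port numbers below are each tool's default; swap in whatever you use, and note that binding to 0.0.0.0 makes the UI reachable by anyone on the LAN):

```shell
# ComfyUI: bind to all interfaces so other machines on the LAN can connect
python main.py --listen 0.0.0.0 --port 8188

# AUTOMATIC1111 WebUI: same idea via its launch script
./webui.sh --listen --port 7860
```

Then from the other machine you just open http://<server-ip>:<port> in a browser.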
So all that to say, I'm not sure the Mac is the best platform at this time when it comes to SD models. My understanding is that even with Metal, the shared memory bandwidth isn't nearly as quick as a dedicated GPU's VRAM. However, for text models like Mixtral 8x7B, it may offer better performance per dollar than a GPU (assuming enough memory is available).
I like to share this spreadsheet with others as I think it's the most telling (docs.getgrist.com/3mjouqRSdkBY/sdperformance): it's user-reported Stable Diffusion speeds across various platforms, GPUs, etc.
Thanks for the valuable info! @@PromptingPixels
Macs are by far the best where they excel, and they are absolutely left behind in many areas. If you choose specific apps that cater to the Neural Engine, which generally requires converting the models to a different format, then they can actually run circles around a PC for Stable Diffusion image generation. Because the Neural Engine acts like a separate set of cores, you get your RAM plus the 16 cores set aside for machine-learning applications. This means 16 GB acts like 32 for those models. And the VRAM on the unified architecture of the new chips counts as the total RAM because it's shared, so a 16 GB Apple Silicon machine has 16 GB of VRAM, minus overhead for running applications. Where it falls behind is anything CUDA-specific.
Helpful, what is your preferred inpainting model these days?
Really loving soft inpainting as the results are more seamless. The checkpoint I use depends on the image being retouched (photographs get a realistic checkpoint, illustrations typically get a general or anime-based checkpoint, etc.).
Is it possible to get a link to the inpainting checkpoint model used (Dreamshaper Inpainting 8)? Thank you for the demonstration.
Sure thing - here ya go: civitai.com/models/4384?modelVersionId=131004
Thanks a lot @@PromptingPixels
Great video.
Thank you!
I have done the same as you, but the VAE Encode (For Inpainting) node gives me the error "The parameter is incorrect".
Thank you man!
Outpainting works better in Automatic1111 (fewer artifacts and less of a border line), doesn't it?
Generally yes - I think Auto1111 is better for modifying images than Comfy. At some point I'll do a proper comparison here on the channel. But I made this video for those folks who would rather keep everything contained in one interface.
Thank you bro!
Where to get that inpainting model ?
civitai.com/models/4384?modelVersionId=131004
Sorry, but the title says outpainting - didn't you show an inpainting example?
The example in the video is using the 'Pad Image for Outpainting' node. I found that the inpainting checkpoint (which is trained on partial images) performed better than the generation checkpoint. It's always best to test which option provides the best results for your use case.
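A rough NumPy sketch of what that node does (my own simplified illustration, leaving out the real node's feathering option): it pads the image with neutral gray and builds a mask marking the new region for the sampler to fill.

```python
import numpy as np

def pad_for_outpainting(image: np.ndarray, left=0, top=0, right=0, bottom=0):
    """Pad an H x W x C image with neutral gray and return (padded, mask),
    where mask is 1.0 in the new (to-be-generated) regions and 0.0 elsewhere."""
    h, w, c = image.shape
    # New canvas filled with mid-gray (0.5 for images in the 0..1 range).
    padded = np.full((h + top + bottom, w + left + right, c), 0.5, dtype=image.dtype)
    # Drop the original image into its position on the canvas.
    padded[top:top + h, left:left + w] = image
    # Mask: 1.0 everywhere except where the original image sits.
    mask = np.ones(padded.shape[:2], dtype=np.float32)
    mask[top:top + h, left:left + w] = 0.0
    return padded, mask
```

The padded image plus that mask is then what goes into the VAE Encode (For Inpainting) step, which is why an inpainting-trained checkpoint handles it well.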
Thank You!
You're welcome!