Mastering InPainting with Stable Diffusion & Auto 1111 Forge: Advanced Techniques for Perfect Scenes
- Published: Feb 19, 2024
- 💼 Xerophayze Google Share: share.xerophayze.com
🛒 XeroGen lite online prompt forge: shop.xerophayze.com/xerogenlite
Dive deeper into the art of in-painting with Stable Diffusion and the innovative Automatic 1111 Forge in our latest video tutorial. Whether you're looking to add characters, objects, or intricate details into your scenes, this guide is packed with advanced techniques, tips, and tricks to help you achieve the precise results you desire.
From the basics of integrating new elements seamlessly to employing sophisticated strategies for more complex compositions, this tutorial covers everything you need to refine your AI-generated images. Perfect for artists at any level seeking to enhance their digital creations with added depth and realism.
Join us as we explore the full potential of in-painting with Stable Diffusion and Auto 1111 Forge, ensuring your artistic visions come to life exactly as you imagine. Don't forget to like, share, and subscribe for more expert insights into AI art creation.
🔗 Links:
🛒 Shop Arcane Shadows: shop.xerophayze.com
🔔 Subscribe to our RUclips channel: video.xerophayze.com
🌐 Explore our portfolio: portfolio.xerophayze.com
📱 Follow us on Facebook: www.xerophayze.com
Check out these other videos on inpainting:
• Inpainting Tutorial - ...
• How to Inpaint in Stab...
• How to change ANYTHING...
• Inpaint whole picture,...
The first video I found that explains about the soft inpainting feature!
Glad I could help
amazing mate
Thank you! Cheers!
Thanks for the videos, you're one of the best teachers on SD I found so far
Wow, thanks!
thanks for the video!
My pleasure!
Umm, have a link to that lightning-fast checkpoint you were using? It looks incredible. You're generating 4 images faster than I can make 1, lol. And the quality looks insane! The SDXL-Turbo one.
There are a couple of them that I use, but the one you're probably talking about is called Realities Edge XL Lightning Turbo V7. It can be found on Civitai here:
civitai.com/models/129666/realities-edge-xl-lightning-turbo
Hi, Thank you for these amazing tutorials. I've just got Automatic1111 Forge installed and am in the process of learning how to use it better.
I've been using the regular Automatic1111 to make desktop background images, along with images for a family quiz I play once a month with my aunts and uncles.
I think when you did the initial upscale of your image, it changed because you forgot to also copy the negative prompt.
Anytime you do an upscale there are going to be minor changes. Maybe I forgot to turn the denoise strength down.
Awesome video. I really appreciate the longer and more in-depth content! I'll be going through many more videos on your channel after this one :)
If you don't mind, can you elaborate on how to change the path/link to an external drive for the checkpoints? I tried the linking method and had all the options *except* for "hardlink". I tried a different method of changing the user batch file and got launch errors lol
Nice. Glad you like them
Sorry, I completely missed that last question you asked. If you're using Windows, look up the MKLINK command and how to use it. That's what I did to link my original Forge installation's model folder to the model folder I actually use. Although I don't know if that matters at this point because of Stability Matrix.
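For anyone following along with the MKLINK suggestion above, here's a minimal sketch of the approach on Windows. The paths are hypothetical — substitute your own install location and external drive. Note that `mklink` needs an elevated (Administrator) Command Prompt, and that Windows does not support hard links for directories at all, which may be why the "hardlink" option mentioned earlier wasn't offered; `/D` (symbolic link) or `/J` (junction) is what you want for a folder.

```shell
REM Hypothetical paths -- adjust to your own Forge install and drive letters.
REM Run from an elevated (Administrator) Command Prompt.

REM Move the existing checkpoint folder out of the way first:
move "C:\Forge\webui\models\Stable-diffusion" "C:\Forge\webui\models\Stable-diffusion.bak"

REM /D creates a directory symbolic link pointing at the external drive:
mklink /D "C:\Forge\webui\models\Stable-diffusion" "E:\SD-models\Stable-diffusion"
```

After this, Forge reads checkpoints from the external drive transparently; copy anything from the `.bak` folder into the new target before deleting it.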
Since I'm using my own model merge for a specific style, I don't really have a fitting inpainting model.
Generally inpainting works okay with it, except when I'm trying to add things in.
Aside from finding an inpainting model with a relatively close style, do you have other suggestions for my issue?
Just use your model to do the inpainting, use a fairly high mask blur, and try to adjust the "only masked" area width and height so that whatever is generated is centered in that area. Once you've added the objects, you may have some seams that you'll need to remove. You can use an actual inpainting model on just the seams, or send the entire image over to img2img and, using your merge and a low denoise strength, re-render the entire image. It may change various things in the image, so you may need to play with what denoise level works for you.
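To make the "only masked area width and height" advice above concrete, here is a small, hypothetical sketch (not Forge's actual code) of the idea: the region that gets re-rendered is roughly the mask's bounding box grown by a padding value and clamped to the image, so the generated object lands centered in that crop.

```python
# Hedged sketch of how "inpaint only masked" conceptually picks the
# region to re-render: grow the mask's bounding box by a padding value,
# then clamp it to the image bounds.

def masked_crop(mask_bbox, image_size, padding):
    """mask_bbox = (x1, y1, x2, y2); image_size = (W, H)."""
    x1, y1, x2, y2 = mask_bbox
    W, H = image_size
    # Expand the box by the padding on every side...
    x1, y1 = x1 - padding, y1 - padding
    x2, y2 = x2 + padding, y2 + padding
    # ...then clamp to the image bounds.
    return (max(0, x1), max(0, y1), min(W, x2), min(H, y2))

print(masked_crop((100, 120, 200, 220), (512, 512), 32))
# -> (68, 88, 232, 252)
```

A bigger padding gives the model more surrounding context (better blending), at the cost of rendering the masked object at a lower effective resolution.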
Won't the image change slightly when using the refiner, since you are doing fewer steps (75%) with the first model and then switching to a different model?
Very little, and that's the point: with the refiner you are trying to add detail, so it has to change the image a bit.
Wow, first time I've seen this method
Hope it helps
Please tell me why you wrote just "fog cabin in the forest" but the result is a cabin in a crystal ball.
Sorry if I said fog; it was supposed to be "log cabin in the forest." But if you notice, I have selected under Styles one of my styles that takes my input and inserts it into a prompt. If you go to share.xerophayze.com you can access my Google share, and I have my styles.csv file there. This will give you access to the prompt that is set up to do this.
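For context on the crystal-ball question above: A1111/Forge styles can contain a `{prompt}` placeholder that the typed text gets inserted into, which is why a short input can produce a very different scene. A minimal illustration — the style text below is invented for the example, not the actual styles.csv entry:

```python
# Hedged illustration of how an A1111/Forge style rewrites the typed
# prompt: a style containing the {prompt} token gets the user's text
# substituted at that spot; styles without the token are appended.

def apply_style(style_template, user_prompt):
    if "{prompt}" in style_template:
        return style_template.replace("{prompt}", user_prompt)
    # No placeholder: the style text is simply appended after a comma.
    return f"{user_prompt}, {style_template}"

style = "masterpiece photo of {prompt} inside a crystal ball, intricate detail"
print(apply_style(style, "log cabin in the forest"))
# -> masterpiece photo of log cabin in the forest inside a crystal ball, intricate detail
```

So the short typed prompt is only the seed; the selected style supplies the surrounding scene description.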
Hello, friend
Hello, how are you?
@@AIchemywithXerophayze-jt1gg I hope your video helps me improve my in-painting
@@jeellyz do you mean translate to Spanish?