The first video I found that explains the soft inpainting feature!
Glad I could help
Yep, great lesson! Soft Inpainting is at 12:45
Thanks for the videos, you're one of the best teachers on SD I've found so far
Wow, thanks!
Whoa! That's a lot of extensions to install just for inpainting; the more extensions you install, the more likely you are to break your Automatic1111.
Yeah, I decided to take a different route. I actually added image generation and inpainting capabilities to my online platform, XeroGen. It takes a lot of the work out of getting that stuff set up for people. Just load your image up and start inpainting.
@@AIchemywithXerophayze-jt1gg Can't wait to try it. Thanks.
Thanks for the video!
My pleasure!
Hi, Thank you for these amazing tutorials. I've just got Automatic1111 Forge installed and am in the process of learning how to use it better.
I've been using the regular Automatic1111 to make desktop background images, along with images for a family quiz I play once a month with my aunts and uncles.
I think when you did the initial upscale of your image, it changed because you forgot to also copy the negative prompt.
Anytime you do an upscale there are going to be minor changes. Maybe I forgot to turn the denoise strength down.
Wow, first time I've seen this method
Hope it helps
amazing mate
Thank you! Cheers!
Awesome video. I really appreciate the longer and more in-depth content! I'll be going through many more videos on your channel after this one :)
If you don't mind, can you elaborate on how to change the path/link to an external drive for the checkpoints? I tried the linking method and had all the options *except* for "hardlink". I tried a different method of changing the user batch file and got launch errors lol
Nice. Glad you like them
Sorry, I completely missed that last question you asked. If you're using Windows, look up the MKLINK command and how to use it. That's what I did so that I could link my original Forge installation's model folder to the original model folder I use. Although I don't know if that matters at this point because of Stability Matrix.
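For example (the paths below are just placeholders for illustration, not the actual folders from the video, so swap in your own), from a Command Prompt you could create a directory junction so Forge's checkpoint folder points at the folder you already use:

mklink /J "C:\stable-diffusion-webui-forge\models\Stable-diffusion" "D:\SD\models\Stable-diffusion"

The first path (the link) must not already exist, so rename or delete Forge's empty default folder first. The /J switch makes a junction, which works for folders on local drives and doesn't need admin rights; mklink /D makes a directory symbolic link instead, but that usually requires an elevated prompt.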
Umm, have a link to that lightning-fast checkpoint you were using? It looks incredible. You're generating 4 images faster than I can make 1 lol. And the quality looks insane! The SDXL-turbo one.
There's a couple of them that I use, but the one you're probably talking about is called Realities Edge XL Lightning Turbo V7. It can be found on Civitai here.
civitai.com/models/129666/realities-edge-xl-lightning-turbo
Since I'm using my own model merge for a specific style, I don't really have a fitting inpainting model.
Generally inpainting works okay with it, except if I'm trying to add stuff in.
Aside from finding a relatively close style inpaint model, do you have other suggestions for my issue?
Just use your model to do the inpainting, use a fairly high mask blur, and try to adjust the "only masked" area width and height so that whatever is generated is centered in that area. Once you've added the objects, you may have some seams that you'll need to remove. You can use an actual inpainting model on just the seams, or send the entire image over to img2img and, using your merge and a low denoise strength, rerender the entire image. It may change various things in the image, so you may need to play with what denoise level works for you.
Won't the image change slightly when using the refiner, as you are doing fewer steps (75%) with the first model and then switching to a different model?
Very little, and that's the point of the refiner: you're trying to add detail, so it has to change the image a bit.
Please tell me why you just wrote "fog cabin in the forest" but the result is a cabin in a crystal ball.
Sorry if I said fog; it was supposed to be "log cabin in forest." But if you notice, I have selected under Styles one of my styles that takes my input and inserts it into a prompt. If you go to share.xerophayze.com you can access my Google share, and my styles.csv file is there. That will give you access to the prompt that is set up to do this.
Hello, friend
Hello, how are you?
@@AIchemywithXerophayze-jt1gg I hope your video helps me improve my inpainting
@@jeellyz Do you mean translate to Spanish?