Resize and fill is useful, but you need a fairly high denoise so the AI can fill in the new area with real content instead of just copying each edge line. Also, it's better to extend by, say, 50px and then another 50px rather than trying to jump 100px in one go. What you can achieve this way is "outpainting"!
Interesting! I used a denoising strength of 0.5 in the example for that one to try and stick close to the original, but a higher denoise makes sense for this function! Good tip 👍
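For anyone who wants to script this, here's a minimal sketch of that incremental outpainting loop against the A1111 API. It assumes the webui was launched with `--api` on the default local port; the field names follow the `/sdapi/v1/img2img` schema, and the prompt, filename, and denoise value are placeholders.

```python
import base64
import requests

API = "http://127.0.0.1:7860"  # default local A1111 address; adjust to your setup

def outpaint_pass(image_b64: str, prompt: str, width: int, height: int) -> str:
    """One 'Resize and fill' pass: grow the canvas and let SD invent the new border."""
    payload = {
        "init_images": [image_b64],
        "prompt": prompt,
        "resize_mode": 2,            # 2 = "Resize and fill" in the img2img UI
        "width": width,
        "height": height,
        "denoising_strength": 0.75,  # fairly high, per the tip above, so SD fills
                                     # the border with new content, not smeared edges
    }
    r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()
    return r.json()["images"][0]

with open("portrait.png", "rb") as f:
    img = base64.b64encode(f.read()).decode()

# Two small passes instead of one big jump, as suggested above.
# (Dimensions are kept at multiples of 64 to stay friendly to the model.)
img = outpaint_pass(img, "scenic mountain backdrop", 576, 512)
img = outpaint_pass(img, "scenic mountain backdrop", 640, 512)
```

Each pass only asks SD to hallucinate a thin strip of new image, which is exactly why the small increments tend to work better than one big extension.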
@@InciteAI Hi there friend, sorry to bother you, but do you have, or know of, any tutorials that would show a newcomer and complete amateur like me how to use AnimateDiff to animate just the sky in a given image? I want to keep a building the same, with no animation, and just have a moving sky. I know it involves masking, but I've gone through a bunch of tutorials for different A1111 extensions, most of which are either broken now or missing the models needed to use them.
It is an absolute breath of fresh air to find someone who understands this well, and can present it like they understand it. Subscribed.
Wow thanks for the feedback, much appreciated 😀
This! I forgot how many videos I saw that didn't make any sense or didn't explain any details. Can't wait for my NVIDIA card to arrive so I can start playing with it. Great tutorial!
An excellent tutorial. Informative and presented in an easy-to-understand way. Best img2img and inpainting explanation I've heard. Thanks.
Thank You! Glad you liked it 😀
Very useful, well structured and to-the-point. Thank you for your work!
10 months later, but great video!!! Easy and quick explanation; you helped me a lot, please keep it up.
Awesome explanations! I've only been playing with SD for a week, and have been struggling with sorting this out.....no longer! Quick concise explanation with a brief visual demo. thank you!
Hey there, glad it helped!
Thank you so much. After watching your video just once, I finally know what Stable Diffusion is.
Great tutorial! Love your style of explaining things so clearly. Will be tuning in for more. I’ve been messing around with SD for a while, and there’s always more to learn…
Thanks! And thanks for the feedback, much appreciated!
Quality, concise, thorough explanation.
Thank you for this!
Thanks man, short but helpful, that's how I like it.
No problem! Glad it was helpful!
Very good explanation! Please do more.
Wow, thank you for this tutorial! I'm wondering if there's a way to get the same image but with a different pose using ControlNet?
You're welcome! Not sure if there is a good way to do that in one go using ControlNet; you'd probably need a workflow like CN/OpenPose into img2img or Inpaint, or even Roop (there's a rough sketch of that chain after this exchange). I was thinking about doing a deep dive on ControlNet in an upcoming video, so I'll definitely explore this more!
Thank you! I wish you would upload a tutorial on img2img with a different pose @@InciteAI
Looking forward to seeing a video from you on ControlNet.
This video was tremendously informative and sincerely appreciated!
@@InciteAI Hi! Did you happen to create a video about this topic? I was wondering if I can add a pose and hands/arms to an existing image.
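To make the CN/OpenPose-into-img2img chain mentioned a few replies up more concrete, here's a rough sketch using the sd-webui-controlnet extension's API. This is assumption-heavy: the extension's schema has changed between versions (e.g. `input_image` vs `image`), and the model name below is a placeholder for whatever OpenPose ControlNet model you actually have installed.

```python
import base64
import requests

API = "http://127.0.0.1:7860"  # A1111 started with --api and the controlnet extension

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("me_standing.png")],
    "prompt": "photo of a man fighting a ninja, dynamic action pose",
    "denoising_strength": 0.75,  # high enough to let the body actually move
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("target_pose.png"),  # image carrying the pose you want
                "module": "openpose",         # preprocessor that extracts the skeleton
                "model": "control_openpose",  # placeholder: use your installed model's name
                "weight": 1.0,
            }]
        }
    },
}
r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
```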
Strange, I don't see Restore Faces at 3:37.
Same here. Not sure how to get it.
@@BasicSneedEducation It's moved to Settings; you can make it appear in the user interface. In the Quicksettings list, type "face_restoration", then Apply and Reload UI.
Great tutorial, great channel!
Thank you! Cheers!
Fantastic tutorial.
Thank you brother! You saved the day hahah woww brilliant stuff
No problem 👍 Glad you liked it!
This taught me so much, I really appreciate it. I only have one question.
Do the colours used in inpaint sketch make any difference?
7:50 Just one note here about Inpaint Sketch + denoise: in Inpaint Sketch, what you draw gives SD a base. So you usually don't want to bump the denoise up too high when using "Original" mode, or SD might veer too far off what you intended with your drawing. The limit depends on the image, of course, but 0.8 is usually too much for me in that mode.
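In API terms, the key pairing is "Original" masked content with a moderate denoise. A loose sketch under that assumption follows; the UI's Inpaint Sketch mode doesn't map one-to-one onto `/sdapi/v1/img2img` (here the sketch is baked into the init image and a mask covers it), and the filenames and prompt are made up.

```python
import base64
import requests

API = "http://127.0.0.1:7860"  # A1111 with --api

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("photo_with_sketch.png")],  # image with your colour scribble on it
    "mask": b64("mask.png"),                        # white over the sketched region
    "prompt": "red leather jacket",
    "inpainting_fill": 1,       # 1 = "Original": start from the sketched pixels
    "denoising_strength": 0.6,  # per the comment above, stay under ~0.8 in this
                                # mode so SD keeps the shape/colour you drew
}
r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
```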
Thanks for the tutorial.
What a great tutorial! BIG FAT FANX!
Great tutorial, no doubt ❤
Very good tutorial. Please go back to this.
This video covered "Inpaint Masked" as far as how it functions, but it didn't say why someone would want to use a feature that seems inferior. Well, when you use "Inpaint Whole Picture", it has to put the whole image into VRAM, and you might not be able to handle how much memory that uses. Inpaint Masked shrinks its view to a small portion of the image for its inspiration, which means it puts a smaller image into VRAM for both processing and output. You likely only want Inpaint Masked if you're running out of VRAM, or if something far away from what you're inpainting keeps being used as inspiration and you don't want it to be.
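If you're scripting this, the toggle the comment describes appears to be the `inpaint_full_res` flag on `/sdapi/v1/img2img` (a sketch under that assumption; `True` corresponds to "Only masked" in the UI, and the padding controls how much surrounding context the model still sees):

```python
import base64
import requests

API = "http://127.0.0.1:7860"  # A1111 with --api

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("portrait.png")],
    "mask": b64("eye_mask.png"),
    "prompt": "detailed green eye",
    "denoising_strength": 0.5,
    "inpaint_full_res": True,        # True = "Only masked": crop to the masked region,
                                     # so a much smaller image goes through VRAM
    "inpaint_full_res_padding": 32,  # pixels of surrounding context kept as "inspiration"
}
r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
```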
Great Info! Thanks for sharing!
perfect guide thank you
Good video, no messing about. If you've already played with SD, this helps with the other img2img tabs you might not have used right away.
Awesome!
Thank you very much for the tutorial. I immediately subscribed to the channel. Will there be an opportunity in the future for a video on "Generative Fill" like in Photoshop? That is, if it's even possible in Stable Diffusion? Thank you again and best regards.
You're welcome, and thanks for the feedback! I'll definitely be doing something on Outpainting in the near future!
Just amazing. I'll stay :D
Thank you, much appreciated! :D
Thank you, I'm subscribing. Excellent video.
Thank you! :D
This is exactly what I need
Thank you
very clear tutorial, thank you
Thanks for the feedback! Glad it was helpful!
ty for this. very useful!
Mind-blowing video! Thank you! Are all these options available in ComfyUI?
Yep the basics are available as comfy nodes as well, though it's a little more complicated than A1111 😅
Please keep making videos !
Thanks bro now i get it!
This is a great video! I had this all working, and then I upgraded and went XL, and now I am having issues. The big thing I am finding is that the templates I use are black and white outlines with large black backgrounds. I have my mask set to retain the black backgrounds, but no matter what settings I use, it alters my background. Do you have any suggestions?
You got a new sub! I hope you upload more vids :(
I don't have Restore Faces in my sd-webui img2img panel.
It's moved to Settings; you can make it appear in the user interface. In the Quicksettings list, type "face_restoration", then Apply and Reload UI.
I was hoping to learn how to do what you did in the thumbnail. Any time I inpaint smaller areas, it just about never gives me any changes even with 1.0 denoise on latent noise. Adding blemishes, piercings, or cool cyberpunk designs on parts of the body seems so frigging impossible.
The thumbnail image was made through multiple iterations of what I show in the video: inpaint the hair > change the background with Inpaint Upload > back to img2img with a tweaked prompt to remaster the whole image. I was lucky enough to get the facial detail in the new img2img generation with the tweaked prompt!
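For anyone who wants to reproduce a multi-pass workflow like that programmatically, here's a hypothetical sketch chaining the passes through the A1111 API. The helper, filenames, masks, and prompts are all made up for illustration, and it assumes the webui is running with `--api`.

```python
import base64
import requests

API = "http://127.0.0.1:7860"

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def img2img(image_b64, prompt, **extra):
    """Thin wrapper around /sdapi/v1/img2img; returns the first result as base64."""
    payload = {"init_images": [image_b64], "prompt": prompt,
               "denoising_strength": 0.5, **extra}
    r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()
    return r.json()["images"][0]

# Pass 1: inpaint just the hair (mask painted beforehand)
img = img2img(b64("original.png"), "pink and blue hair", mask=b64("hair_mask.png"))
# Pass 2: swap the background via an uploaded mask ("Inpaint upload" in the UI);
# latent noise (inpainting_fill=2) encourages a genuinely new background
img = img2img(img, "neon city at night", mask=b64("bg_mask.png"), inpainting_fill=2)
# Pass 3: one plain img2img pass over the whole frame with a remaster-style prompt
img = img2img(img, "remastered portrait, sharp focus, detailed face")
```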
KING
How do you add that restore faces checkbox? No one mentions it in tutorials.
It's moved to Settings; you can make it appear in the user interface. In the Quicksettings list, type "face_restoration", then Apply and Reload UI.
Hello, my inpaint leaves a pink color on the edited area. Is there a solution to clean that up?
Does this only work with generated photos? Because when I choose a photo of mine, it creates crazy stuff instead of keeping the image.
Okay, making progress! So I'm using a preexisting image for a pose that I found online, and if I set denoising strength over 0.2 it starts deviating heavily, so I can't touch that denoise setting at all or I'll get cats and cows again! Hmmm... quantum physics realm. I will figure it out eventually.
Even at a higher denoising strength the results should stay on theme with your prompt. If you aren't prompting for cats and cows, you shouldn't get them! It's very strange! Is this happening with all checkpoint models, or are you just using one? Are you using any LoRAs? How detailed is your prompt?
Thankyou
You're welcome, I hope it was helpful!
No matter what I do, I can't seem to get any of the prompt to affect the original image. It almost acts like it's just trying to "redo" the original image itself, with no new changes from the prompts. I'm new to Automatic1111, but I'm familiar with the basics of SD. Any tips or solutions? Thanks!
My first thought is to check your denoising strength; you may need to bump it up. Otherwise, you could try reloading the base model or restarting A1111 itself; disabling all extensions and then restarting may also help.
what is the batch tab?
Say I have an image of myself just standing there. Can I use the img2img tool to take that image of myself, change my pose, and have me fight a ninja, as an example?
Or would I have to use another tool to make something like that happen? Asking because it looks like img2img keeps the same pose and adds to it. But I'm wondering, if I change the prompts while using img2img, would it make that drastic of a change?
You'll probably want to check out ControlNet for that: ruclips.net/video/YgWv4zLZ1rk/видео.htmlsi=yMzzoFndr7tRnphD
Wow, your generations are so fast; mine take minutes! Is that a graphics card limitation? I have a 1660 Super.
I'm running a 3070 so it's fairly swift, but I also speed up some of the generations in post for the sake of the video!
I'm running into this error when trying to use Inpaint Sketch: ValueError: operands could not be broadcast together with shapes. Anyone had this before and know of a fix? (this video is a great help btw!)
Hey, do you have any idea why I have no edit icon (top right corner next to the X; I only have the X) when I send my picture to img2img? Please help.
But if I use a real scanned picture in img2img... what should I do to NOT have too many changes to the original image?
Denoising strength is what determines whether you make a huge or a small difference to an image, so I would set it in the 0.2-0.5 range.
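A quick way to find the sweet spot for a given scan is to sweep that range and compare the results side by side. Here's a minimal sketch against the A1111 API, assuming `--api` is enabled; the prompt and filenames are placeholders.

```python
import base64
import requests

API = "http://127.0.0.1:7860"

with open("scan.png", "rb") as f:
    scan = base64.b64encode(f.read()).decode()

# Sweep the 0.2-0.5 range suggested above: low values preserve the original,
# higher values give the prompt more influence over the image.
for strength in (0.2, 0.3, 0.4, 0.5):
    payload = {
        "init_images": [scan],
        "prompt": "restored photograph, natural colours",
        "denoising_strength": strength,
    }
    r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()
    with open(f"out_{strength}.png", "wb") as out:
        out.write(base64.b64decode(r.json()["images"][0]))
```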
How do you transfer tattoo designs onto an image?
img2img Sketch gives me lag. Anyone know why this is happening?
Even doing exactly what you are doing, it's not working, and I've been at it for weeks! I want to cry and just give up! At least I'm good in the computer hardware department! I'd better keep my hands off stuff I don't understand! So frustrated, I can't anymore. Where you guys get a bit of variation from your original, I'm getting cats and dogs! xD What the hell is going on? I just want the original image!
Hey there! So are you getting completely different results from your prompt? Does this happen in txt2img as well, or just img2img? If you're following along with the video and you're getting really random results, there could be something wrong with your A1111 install. Another thing to try is to have your CPU generate the noise from the seed rather than your GPU (the GPU still handles the generation; this just affects the noise that's created). Go to Settings and look for "Random number generator source", then set it to CPU. Hope you figure it out!
Thanks for the reply! I found that I had to turn down the denoise option; it was at 8 or something, and now I can actually copy an image to use as a template. Also, I guess it never helps when you are generally darn tired and exhausted. I'm gonna lay off it for a while and come back to it all fresh! @@InciteAI
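If you'd rather flip that RNG setting from a script instead of the Settings page, recent A1111 builds appear to expose it as the `randn_source` option. A small sketch under that assumption (the key name may differ in older versions):

```python
import requests

API = "http://127.0.0.1:7860"  # A1111 with --api

# Option A: change it globally, same as Settings > Random number generator source
requests.post(f"{API}/sdapi/v1/options", json={"randn_source": "CPU"})

# Option B: override it per request so only this generation is affected
payload = {
    "prompt": "portrait photo",
    "override_settings": {"randn_source": "CPU"},
}
r = requests.post(f"{API}/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
```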
❤
🥰
and yet nothing about controlnet
instead of changing the prompt to "pink and blue hair" could you just change it to Democrat?
Bot
@@joeabascal2341 It's Mr. Bot to you
Lmao nah that shit was funny!!
*rimshot*
No, but you can change it to a Cheeto wrapped in a Chinese-made American flag and inspire millions of rubes to vote for a con man.
THANKS
You're welcome!