Prompt styles here:
www.patreon.com/posts/sebs-hilis-79649068
Hi seb, Do u know why my art generation stuck at 95% for +10 seconds? my GPU is 3090, it was pretty fast before :\
@@pkqqq 0:23
Hello! First of all, thank you for the video and all the other ones, they're helping me a lot.
BUT I have a weird question to ask: the ControlNet tab text goes green when it's enabled (at 1:54), and I loved that. Can you please tell me how you did it? Mine looks the same whether it's enabled or not.
I'm sorry for being off topic, but my ControlNet is stuck at version 1.1.200. It says it's up to date whenever I check for updates. Please help, thank you.
Both of you need to update A1111 first, and then ControlNet.
Thanks for answering my question before I asked it, about the workflow and making two steps for vertical then horizontal inpainting.
Does not work! I am doing it exactly as you described, but the result has nothing in common with the picture loaded in ControlNet... where is my mistake?
Man, I love your guides so much. I'm new to this kind of stuff but learning fast by watching your videos over and over xD
I love your videos, they help so much and are very informative. The dad jokes are the best. Thank you for your channel.
You are so welcome! Thank you for the kind words 😊
@@sebastiankamph very helpful, thank you!
@@isamarsh You bet! Thanks!
Can someone please help:
Under ControlNet, when I select inpaint only, no models load for inpainting. Why does this happen?
I had no idea ControlNet Inpainting worked with text2img. The way you showed everything was incredibly convenient and helpful. Thanks!
Thank you Sebastian, yet again another great tutorial.
My pleasure!
I love your dad joke intros so much. You have a great personality and are a great presenter for these complex concept tutorials. Really appreciate your work!
Thanks so much! Very kind of you, it's greatly appreciated 😊
Where do you get the model "control_v11p_sd15_inpaint [ebff9138]" from? I have the inpaint_only but not the model, do you have a good source for these models.... Great video as this has been a major issue for my design work.
If you found the solution to this, please let me know :)
If you also found the solution, please let me know :) @@ChaosFollowing
same here
Did anyone ever figure this out? I am running into this issue currently.
How can you drag the image from the preview to ControlNet? If I try this, the image just pops up larger.
I was wondering the same thing. I tried pressing Alt and dragging, Ctrl and dragging, etc., but I still can't figure it out. I tried a Google search but found nothing.
You always make me chuckle with your dad jokes, please never change XD
And also, great info as always, thanks a lot! :D
Glad to hear it! I'll keep them coming 😄
Hi Sebastian, your tutorials are simple and easy to understand, which is what makes them great.
Glad you think so! Good to have you aboard Werner as always 😊🌟
Thank you! I noticed that new inpainting model in controlnet and it didn't make sense... until I watched this tutorial. 😊👍
FINALLY, SOMEONE SHOWS HOW TO USE CONTROL NET TO OUTPAINT.
Funnily enough, I tried just about everything except for resize and fill. -_-
Glad you finally got it working then! 😉
Hi Sebastian, your tutorials are simple and easy to understand, which is what makes them great. In this tutorial my image was not getting good results when moving from 512 to 1024 (vertically and then horizontally), so I tried adding a step, going 512 to 768 and then 1024... it worked well.
The best outpainting tutorial I've seen so far!
Thank you, Sebastian! You rock🎖🎖
Yeah, it works really well and is easy to use. No mask, nothing, just put in the size and done.
Sebastian is the best at making SD tutorials 👍
I tried this on 2 art pieces and it was fantastic, thanks for sharing this!
I mostly use openoutpaint. I dont see it mentioned often but it works great! Same as photoshop. Just leave the prompt blank and let SD do the magic. I think an inpaint model works best, but its not required.
what is it? where can i use it?
@@ywueeee It's an Automatic1111 extension.
Both options have their pros and cons. You will need an inpainting model for consistent inpainting with openOutpaint; ControlNet can't be used, and scripts can be used but not in a user-friendly way. The ControlNet inpainting preprocessor lets you use any model, and you can also use other ControlNet models simultaneously for more control over the output. But the blank canvas in openOutpaint is just OP: you've got the queues, resizeability, infinite canvas, shortcuts... It sucks that openOutpaint isn't being well maintained, because if it included more standard Auto1111 features and ControlNet it would instantly become the one extension to rule them all
Except for controlnet of course.
Another fantastic video! Thank you! 🌟
Glad you enjoyed it, you superstar you. Hope you're having a great day 💯
Hi! Love your videos. Where do you get the model "control_v11p_sd15_inpaint [ebff9138]" from?
Outstanding Outpainting! -Hope it doesn't rain! Because that might be outrageous for some. Thanks for your video!
Do you outsource your jokes? 🤣
If you have trouble getting it to fill the blank space with content, set Control Mode to "ControlNet is more important".
Is there a way to do this in one direction only? Let's say I want to add a body to a head, can I focus on just the lower region, or does it always have to be split 50/50 up/down?
Not with the current extensions.
For those with low VRAM, is there any option? I can't stretch over 760/760 because I run out of memory 😅😅😅
Thank you for the information.
Video topic suggestion: improving old stills and 8mm films. Scanned images have artifacts; maybe colouring B&W pictures. What can I do today with Instruct pix2pix or ControlNet to improve their perceived quality, without adding fingers or other hallucinations?
Also, how about old scanned films? I have a bunch of old 8mm footage scanned to 1080p, but even when in focus, the detail is low resolution. How tricky is it to fix with today's FOSS tools?
Thanks a lot for this method with CN! Very interesting! ❤
Beautiful mate, that was superb!!
Good innit? 💥
This might seem like a trivial question, but I cannot find a way to drag an image from the main generation box to the ControlNet box. I always have to go into the ControlNet box and then search for the image in the folder. If I click on the image in the main generation box, I immediately get a larger view of the picture and it does not let me drag it.
Is there a way to outpaint in only a single direction? So let's say you wanted to expand the sky above the castle but didn't want it to fill in anything below it, like the city/water that was generated.
yes. poor man's outpainting script.
Is there a way to inpaint only in certain directions? Perhaps I missed something.
I love controlnet! Great vid
Best gift to AI art for sure
Thank you. Definitely learned something today.
Which is better Outpainting with 1111 or Adobe?
SD (a1111) has a lot more control. Adobe's version is very limited. UI/UX obviously better in Photoshop though.
Thanks for everything, your channel is very helpful for beginners.
This was perfect, thank you.. my adhd hardly held in there.. but we good thank you.. :D
Hey Sebastian, I have a request. Can you inpaint between two separate images? For example, two images on the sides with just black in the middle, to fill in between, sort of like a panorama. It would be a great help if the inpaint options could also be shown.
That would just be in-painting no?
@@puyakhalili Inpainting uses base colors. What I want is to fill black areas where there is nothing for the AI to reference, for example extending two images in between, or say there is a large black border on the top. I experimented over the past few days since I didn't get an answer, and it was resolved for me by using the fill option. Fill uses info from the entire image for reference and does not rely on the existing underlying image.
@@ruuuuudooooolph I think if you don't have any content to begin with, then you want to fill using "Latent Fill" in the in-painting.
Great tutorial as always. BTW, I tried to inpaint a stormy sky, but I blue it.
Boom, nice! 😊💫
@@sebastiankamph Sebastian is spreading his dad's joke virus😂
Nice. Gotta get that Stable Diffusion Photopea extension bro!
Can someone explain how to install the inpaint model? I have CN but not this particular model. On Hugging Face the formats are .py and .safetensors, but the local models folder is .yaml
Wow, just had to redo a few pictures because of the new possible extra space. omg!
Can you do a more detailed inpainting guide? I watched the ControlNet guide but it isn't cutting it for me. I need more control over what's getting changed and how.
Check my full inpainting guide, blue-ish thumbnail.
Every time you tell a joke it makes me laugh, but this one made me fall off my chair.
Haha, glad to hear it 😁
I always wanted to do this, thank you so much
Good stuff as per usual!😊
Hey! With the new SDXL models the whole workflow is broken. I can see that there are new ControlNet models for canny, depth, and pose, but not inpaint. Do you know anything about that?
I got an error: ValueError: ControlNet failed to use VAE.
Another great tutorial! Thank you
Glad you liked it!
Why did Stable Diffusion and ControlNet go to the party together? Because they knew they would make quite a picture!
Hah 😊
That's the kind of comment that fits right into this channel's algorithm. I love it! XD
Quite a scene
@@brianjdillon What? ^^U
The algorithms (even the ones executed by bots) are made by humans, and we were just making jokes about dad jokes. Why should anybody feel empty? Dude, be happy, and let other people be happy too!
Love your content my friend, thank you for sharing
Thanks so much! 😊
THANK YOU!!!! What if I only wanted to expand the image to the right, to the left, up, or down?
THX again Seb, much appreciated :)
Can I use this method to repair a trimmed part of an image (trimmed hair, trimmed shoulder, etc.)?
I don't know why, but it doesn't work for me. I had generated an image using a custom model from Civitai, but when I follow the steps you've mentioned, using the same prompt but a random seed and doubling the width or height, I just get borders of a single color instead of an outpainted image. Is there anything I'm doing wrong? Or something I have to change?
Another comment here suggested to use the 'controlnet is more important' setting.
@@SRagy I'll try it out thanks :)
@@Dhruv1223 Also try increasing denoising.
@@SRagy Yeah, I saw Olivio Sarikas' video and followed his instructions, and it works pretty well. Still not perfect, but none of the outpainting techniques are perfect unless we manually draw what we want to see in.
My outpainted areas are coming out darker than the image. Is it a VAE problem? I have tried all the VAEs I have.
Wasn't there an extension, I think it had "Canvas" in the name, that would let you select the space around the image and have the AI fill in wherever there wasn't anything before?
You rock. You are the best I've watched so far, besides an amazing Indian and/or Arabic man whom I will, if I haven't already, subscribe to. I've seen at least 15 other channels of guys I just can't follow, and they skip stuff. You don't... thank you so much!
Thank you kindly! Loving the username btw 🌟
I almost spit out the coffee with the joke! Great video!
😁 Glad you enjoyed it!
I don't have an inpaint model. Where should I find one, and where do I put it?
Hello! The resize mode with the inpaint ControlNet is greyed out!!! Any suggestions, please?
I don't know if the developers changed it, but when I tried it today, inpaint would overwrite my original picture with the generated picture instead of expanding it. It worked fine about 3 days ago.
What's the difference between outpainting with ControlNet in txt2img and outpainting with img2img? Both do the same thing in the end. Why use one rather than the other?
What if I just want to expand to the right side and not both sides?
Hi, where can I get the "preprocessor" and "model"? They're all empty.
I don't know why, but the area where I'm trying to add something has a slightly changed color. Therefore I can see the initial rectangle in the center and the outpainted area in a slightly different color ((
Can I use it with a picture that wasn't made in Stable Diffusion?
Why don't you have the Denoise slider in the txt2img tab?
It keeps changing in different versions.
It is so weird. When I use this method, the newly expanded region has like a frosted glass look. It is less colorful and sharp than the original image. The original image is dark, has good contrast and is sharp. I can get it slightly better by selecting "ControlNet is more important" but the dull area is still clearly visible. I tried different sampling methods, different sampling steps, different control weights, and the problem remains.
How do I get Unit 0 and Unit 1 in ControlNet?
When I use it, I get the original image generated too. What am I doing wrong?
I can't drag the image to ControlNet like you do. txt2img generates the image, and I need to drag it into the ControlNet inpainting module.
Brother, everyone is talking about using Stable Diffusion like this: creating characters, upscaling, etc. Now the real question is how to use Stable Diffusion on our real images, like changing an outfit, a pose, and other mind-blowing things.
Where can I get that ControlNet inpainting model?
Another great helpful video! BIG FAT FANX!
Why doesn't my ControlNet have these inpaint preprocessor options? It only has "global harmonious". And my menu also doesn't have this option to pick each model; how do I do that?
You should update your controlnet
The con with this "outpainting" is that the base picture is always at the center. Is there any way to change the position, like if we want to expand the left/right side only? Because we can do that in Generative Fill in the PS beta.
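For what it's worth, a common manual workaround is to pre-pad the canvas yourself (in any image editor or script) so the blank area sits on only one side, then inpaint just that region in img2img. A stdlib-only sketch of the paste-offset arithmetic (the function name and side labels are my own, not an A1111 feature):

```python
def paste_offset(orig_w, orig_h, target_w, target_h, side):
    """Where to paste the original onto the enlarged canvas so the
    blank (to-be-outpainted) pixels end up only on the chosen side."""
    if side == "right":   # original flush left, new pixels on the right
        return (0, (target_h - orig_h) // 2)
    if side == "left":    # original flush right, new pixels on the left
        return (target_w - orig_w, (target_h - orig_h) // 2)
    if side == "down":    # original at the top, new pixels below
        return ((target_w - orig_w) // 2, 0)
    if side == "up":      # original at the bottom, new pixels above
        return ((target_w - orig_w) // 2, target_h - orig_h)
    # fallback: centered, which is the behaviour complained about above
    return ((target_w - orig_w) // 2, (target_h - orig_h) // 2)

# Expanding a 512x512 image to 1024x512, right side only:
print(paste_offset(512, 512, 1024, 512, "right"))  # → (0, 0)
```

You then mask only the padded strip before generating, so the original pixels stay untouched.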
When I try, only the "None" option appears under "Model" (no "control_v11p...").
Make sure you download the models
@@sebastiankamph Where can we download it?
How do I use ControlNet models with SDXL?
For some reason, trying this method just generates completely new images for me, with all the same settings (except the appropriate resolution for my image). Inpainting with this ControlNet model works as it should.
Same here; I'm getting these weird robots when my input is very different from it.
What is the advantage and disadvantage of using inpaint+lama in the ControlNet dropdown instead? I see some videos using that one instead of the plain inpaint in that selection box.
Wow, didn't realize ControlNet now has its own inpainting. Is there still use for A1111's native inpainting then?
have you tried the new run diff models? Fx pr and 2.5?
also, whats green and sings? a salalalalaaad
Thanks! BTW @0:23 - you forgot to link the videos you point at, at this point in the video.
Is there an SDXL version?
Awesome!
Hello all, I'm a newbie and have only been trying Stable Diffusion for the 2nd day.
I followed this step by step, but every time I click generate, it tells me I need to clear some VRAM. I use an RTX 3070 Ti, is it not good enough? Or am I doing something wrong?
Thank you...
I don't have the "resize to" and "resize by" options in img2img, and it doesn't work. Anyone have an idea why? I tried to git pull, but nothing changed.
ControlNet v1.1.224, "3 units", why do I have 3 units? How can I delete this?
Could you tell me how to show the "style button" in the right corner of the Stable Diffusion UI?
The style is just a preset prompt/negative prompt. You write both in the respective textboxes, then click the floppy disk button (save style) and give it a name. When you then select it, it will be as if you added the preset prompt to your actual prompt.
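For reference, in the A1111 versions I've used, saved styles end up in a plain styles.csv next to the webui, so you can also edit them by hand. A small sketch of that layout and how the merge behaves (the file format is an assumption from versions I've seen, the style name is made up):

```python
import csv
import io

# A1111-style styles.csv: a header row, then one row per saved style.
styles_csv = io.StringIO()
writer = csv.DictWriter(styles_csv, fieldnames=["name", "prompt", "negative_prompt"])
writer.writeheader()
writer.writerow({
    "name": "MyStyle",                        # hypothetical style name
    "prompt": "masterpiece, best quality",
    "negative_prompt": "lowres, blurry",
})

# Reading it back and merging it, roughly the way the UI appends a
# selected style to your actual prompt:
styles_csv.seek(0)
styles = {row["name"]: row for row in csv.DictReader(styles_csv)}
user_prompt = "a castle by the sea"
final_prompt = user_prompt + ", " + styles["MyStyle"]["prompt"]
print(final_prompt)  # → a castle by the sea, masterpiece, best quality
```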
Anyone know why I keep getting borders? Sometimes colored, sometimes black.
does this work for SDXL models? thanks for the post!
great one
😊😊🥰
Awesome tip, thanks! Outpainting sucked so bad before lol. One minor thing... I'm not able to drag any images I generate back into ControlNet. When I try, it just opens the preview, and instead I have to go into the output folder and drag them from there. Is there a setting I'm missing or something?
Maybe try it in a different browser?
Yeah
You are like the Bob Ross of AI art
Hi Sebastian, thanks for your tutorials! Can you tell me why my generated image looks completely different from the original image? I can't seem to get it to outpaint even though I followed your steps. Thanks!
Could be you forgot to click "Enable" in ControlNet? I did this and I just kept generating new images using the same prompt.
Great video. Thanks!! Although I keep getting messy results when using an inpainting ckpt, Deliberate in my case.
Super helpful!! Does it work with videos?
I heard you could add a webui Photopea extension. Can you make a quick tutorial?
Also, where do I get the ControlNet model?
Why don't I have 2 unit tabs in my ControlNet extension?
Settings -> ControlNet