NEW! ControlNet 1.1 No Prompt Inpainting.
- Published: 1 Jun 2023
- The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/sebastiankamph06231
Let's look at the smart features of ControlNet 1.1 inpainting in Stable diffusion.
Inpainting model here: huggingface.co/lllyasviel/Con...
FREE Prompt styles here: www.patreon.com/posts/sebs-hilis-79649068
Support me on Patreon to get access to unique perks! / sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
LIVE Pose in Stable Diffusion • LIVE Pose in Stable Di...
Control Lights in Stable Diffusion • Control Light in AI Im...
Ultimate Stable diffusion guide • Stable diffusion tutor...
Inpainting Tutorial - Stable Diffusion • Inpainting Tutorial - ...
The Rise of AI Art: A Creative Revolution • The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!) • 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion • Stable diffusion anima...
Dreambooth tutorial for Stable diffusion • Dreambooth tutorial fo...
5 tricks you're not using in Stable diffusion • Top 5 Stable diffusion...
Avoid these 7 mistakes in Stable diffusion • Don't make these 7 mis...
How to ChatGPT. ChatGPT explained in 1 minute • How to ChatGPT? Chat G...
This is Adobe Firefly. AI For Professionals • This Is Adobe Firefly....
Adobe Firefly Tutorial • Adobe Firefly Tutorial...
ChatGPT Playlist • ChatGPT
Fantastic, controlnet really has made stable diffusion glorious! Thanks for the video seb!
You're very welcome my dear friend!
As far as I understand, the point of the inpainting ControlNet is that it works with non-inpainting models. You can take any model and inpaint more efficiently without having to retrain or do a hacky inpaint model merge.
Yup. I was pulling my hair--er, beard--out trying to understand why all my results were desaturated and low contrast. Silly me, I was using inpainting models.
The portraits were bad examples. If all you have is a blurry background, you'll get more blurry background. It needs something to work with. Your last example is the best because the masked area actually had some content to work with and the opposite side of the image also had something for the AI to utilize instead of just more blurry nothingness that the portraits had.
Yeah, until the very end, I kept thinking, "What on earth is the point of this?".
I'm sure there's a place for this, but I honestly don't see this producing results better than what you'd get with normal inpainting when used with a good prompt. Moreover, the addition of the Photopea extension now means we have precise masking control as well as photo bashing when doing inpainting, which is such an amazing combination.
You can use this with a prompt as well, and I assure you it is infinitely easier to get the job done with ControlNet inpainting. It also works really well whether you use an inpainting model or not. But I don't really use it without a prompt, and I tend to go into inpainting either way and use ControlNet inpainting there. That allows me to switch between only-masked and the full picture depending on the task, as well as adjust denoising strength, and you can use much higher denoising strengths. And obviously, you can use those precise masks with this too. If you are in regular inpainting, ControlNet will use the mask you set up there as normal.
I mostly use the "global harmonious" inpainting variant.
As an example, I was changing a crossed leg pose to an open leg pose with this. I had to mask out a pretty big area for this which included a big part of the background, and despite using 0.9 denoising strength, controlnet inpainting reproduced all that background faithfully. This was not on an AI generated image. It was on an actual photo.
Thanks for mentioning the Photopea extension! That's exactly what I have been missing in my workflow! Not needing to go back and forth between PS and SD anymore is huge.
Nice update -- how would you approach outpainting with this?
It's not the most polished, as some people said in the comments, but I'm glad they are working on something like this. The less we need Adobe the better, and it can become a great feature in the future.
Thank you for bringing us news from ai world 😊
Also, I know it's not AI related, but I would like to know your opinion about Affinity, whether their stuff is good or not, or if you prefer them over Adobe 😁
Given that it’s literally a single drunk student’s implementation, written and committed to git 8 hours after his post about the feature disparity on Reddit, while it probably took Adobe months to copy free open-source code and use the already implemented interrogate CLIP function to generate a prompt if none is given, I think it’s pretty good for the time and effort that went into the new release.
I’d say Adobe as a company is pretty fucked at this pace, because ControlNet is already miles ahead of what Adobe could ever dream of, and if they ever come up with an almost “innovative” feature, there is either already an auto1111 extension for it or it will be added to ControlNet in less than a week.
Thank you, very helpful. Is there any tutorial on uploading a mask image instead of drawing it?
Hello, thanks for the interesting video. I have a couple of questions about it:
1. Did you compare the ControlNet inpaint model and the Deliberate inpaint model? If you did, which one is preferable?
2. Do I understand correctly that the ControlNet inpaint model is made for models that don't have their own inpaint version?
AI is not perfect yet
Wow, I'll try to use it, thank you for the example 👍
Came for your guides and updates about stable diffusion, stayed for your Dad jokes/puns 😂
Very good results with the Protogen models
This is kewl stuff. Your videos are always a firehose of information. Others make vids that are long and drawn out and don't get to the point. With yours, I pause and enlarge the screen to see what's going on and follow along.
I'm really glad you're enjoying them, thank you! 😊
I wonder if this can be used for out painting. Extend the image and leave the new area black and see what happens...
Have you already tried the ADetailer extension? It seems smart enough to fix eye and hand problems in low-resolution or mid-shot renders without bothering with hires fix.
I believe it also uses an inpainting method, but works automatically.
How well does it perform now, when using a prompt?
Can you please help with the case where I want to put tiles on my wall, so that they will be the right size with a perfect grid?
Great update!
Hi Sebastian! Thanks for the video. How did you add the Control Type section to your ControlNet extension?
That was part of a recent CN update.
@@ADMNtek Thanks!
Seems useful for fixing small details with surrounding context.
Yeah, just did that. Removing unwanted stuff works great with it, but somehow the inpaint is painfully slow for me. It goes from 5 s/it to 36 s/it with the inpaint model, but that might be a Mac issue though.
The results were... OK-ish? I didn't see anything that justifies the claim that ControlNet "Inpainting Just got Better!", as the title says... hmm.
Well, I don't see any improvements in ControlNet inpaint versus regular inpaint. What's the point of that? And by the way, the Deliberate model also has an inpainting version, which is very good.
It’s just a diffuser trained on inpainting; you can use it with any checkpoint, so there’s no more need for dedicated inpainting models.
Trained/merged inpainting models only have the advantage of freeing up VRAM, because you don’t need the ControlNet.
ControlNet inpainting, on the other hand, saves you tons of hard drive space, because you don’t need to keep the inpainting variant of each checkpoint anymore.
Depending on your storage/VRAM situation, each has its own benefits.
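For readers who script things, that difference shows up in the diffusers API: the same inpaint ControlNet attaches to any ordinary SD 1.5 checkpoint. A minimal sketch, assuming the diffusers library, the lllyasviel/control_v11p_sd15_inpaint checkpoint from the description, and placeholder checkpoint/file names:

```python
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

def make_inpaint_condition(image, mask):
    # The inpaint ControlNet is conditioned on the original image with the
    # masked pixels knocked out (set to -1), so it keeps the full context.
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    mask = np.array(mask.convert("L")).astype(np.float32) / 255.0
    image[mask > 0.5] = -1.0  # mark pixels to regenerate
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    return torch.from_numpy(image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any regular SD 1.5 checkpoint works here
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("photo.png")  # placeholder file names
mask = load_image("mask.png")    # white = area to repaint

result = pipe(
    prompt="",        # an empty prompt works; the surrounding context steers it
    image=image,
    mask_image=mask,
    control_image=make_inpaint_condition(image, mask),
    strength=0.9,     # denoising strength; context survives even high values
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```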
I see a calculator in your UI, which extension is it?
Why would you use deliberate_v2 for inpainting, and not Deliberate-inpainting?
Can it turn a portrait photo into a landscape photo, like Generative Fill in the Photoshop beta?
I had assumed inpainting did this already, but now I can see why I struggled with it sometimes.
Can you use depth information from an image and then do an inpaint? For instance, a tattoo on the nose, on the ear, or on the neck going up to your cheek? Thanks man!
inpaint_only, inpaint_only+lama, inpaint_global_harmonious - what are the differences between them?
Thanks for the video😉
Wow, indeed! Love your content!
Thank you, good to have you here! 😊🌟
Very nice!
Fantastic!!!
Where do you take poorly AI image generators? To the hospital's inpaint-ients ward.
Great video. I keep meaning to give inpainting a go but never get around to it. Maybe now I will.
Thank you! At @1:49 your mask excluded the reflection in the water, which is what caused the oddities in the generated images.
Also, at @6:35 you're getting odd generations, clones, etc. because the txt2img dimensions you have are much larger than the model you are using. I would keep the 16:9 aspect ratio but make the long edge max 768, then hires.fix or img2img upscale later.
You're right! I missed this while recording.
@@sebastiankamph May I suggest a re-edit of this, as your videos are super informative and it would be a shame if this one makes it appear as if this inpaint is useless. Having said that, the little 'mistakes' mentioned are excellent learning points for everyone, including me! Also try it WITH prompts too.
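For what it's worth, the "long edge max 768" advice above is simple arithmetic you can automate; a quick sketch (the helper name is made up for illustration):

```python
def fit_long_edge(aspect_w: int, aspect_h: int, long_edge: int = 768, multiple: int = 8):
    """Scale an aspect ratio so the long side is `long_edge`, rounding both
    sides to a multiple of 8, which Stable Diffusion expects."""
    scale = long_edge / max(aspect_w, aspect_h)
    w = round(aspect_w * scale / multiple) * multiple
    h = round(aspect_h * scale / multiple) * multiple
    return w, h

# 16:9 at a 768 long edge -> (768, 432): generate here, then hires.fix
# or img2img upscale to the final resolution.
print(fit_long_edge(16, 9))
```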
How do you have the ControlNet units? Is it a separate extension?
Automatic1111 settings.
What if I remove a logo from an outfit and want to upload an image to replace it?
Thanks for such a useful video, but I don't have the Control Type buttons in my ControlNet section, and ControlNet is on version 1.1.150!!!! I updated it 5 times, but no changes. Could you help me, Mr Kamph? Why is that? Thanks
If your ControlNet is not updating, you need to update your a1111 by typing git pull in the terminal inside your a1111 directory.
@@sebastiankamph Thanks for the advice
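For anyone following along, that update boils down to something like this (the directory is a placeholder for wherever your a1111 install lives):

```
cd /path/to/stable-diffusion-webui
git pull
```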
Here's a scenario I think might be quite helpful: trying to get rid of an item completely. Say you have a portrait of a woman and she's wearing a wristband, and you want to get rid of the wristband. With normal inpainting it can be quite an effort to convince it to get rid of an item and replace it with something that looks realistic. This might work better because it understands the context but doesn't derive anything from the place where you're getting rid of it.
Man, I can't keep up with the updates 😂, but it's a nice thing to have. Inpainting without any prompts, this is wild!
How do you zoom into the ControlNet inpaint picture? If I need to change small, precise details, it's impossible. In normal inpainting there is zoom, but what about ControlNet?
As far as I know, there is no way to do that as of yet.
@@ADMNtek So actually CN inpaint is almost unusable :(
I just found that in general this is a crazy good eraser tool
I have "FileNotFoundError: [Errno 2] No such file or directory: 'C:\\New Folder\\tmp\\gradio\\" error, what's can be wrong?
Is there actually a Realistic Vision inpainting model, or is it just a merge of Realistic Vision and the 1.5 inpainting model?
It is an 'official' Realistic Vision inpaint model, but it's probably done the easy way, with a merge.
Uhm... so I've been wondering about that Midjourney thing... why the heck would you want to pay to use a Discord bot? Is it for people who don't have a PC with VRAM?... And can I just rent out my video card?
It's easy and requires no effort or hardware. People will pay for that 😊
Model: * levitating islands, rescaling character cloning, red nose *
YouTuber: 'Tis actually good. Buy my free prompt styles
Buy? They're 100% free.
There's another cool feature you forgot to show off. By playing around with the resolution, you can also outpaint with this!
But is there a way to define the direction in which the outpainting should go? Changing the height, for example, does not set whether I want the outpainting above or below.
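On the direction question: one way to look at it is that outpainting is just inpainting on a padded canvas, so which side you pad is the direction you extend. A rough sketch with PIL (file names are placeholders, and the actual inpaint step is left to whatever pipeline or extension you use):

```python
from PIL import Image

src = Image.open("photo.png")
pad = 256  # pixels of new canvas to invent

# Pad on the right only: the outpaint will extend the image to the right.
canvas = Image.new("RGB", (src.width + pad, src.height), "black")
canvas.paste(src, (0, 0))

mask = Image.new("L", canvas.size, 0)  # black = keep as-is
mask.paste(255, (src.width, 0, canvas.width, canvas.height))  # white = generate

# Feed canvas + mask to any inpainting workflow to fill in the new strip.
canvas.save("outpaint_canvas.png")
mask.save("outpaint_mask.png")
```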
Why is my OpenPose not working? Canny and the other models are working perfectly.
Did you know that the masking tools only work right with the Inpaint model? If you try to use it with any other model (to mask out part of your input image, for example), when you draw a mask, it dumps your image and only honors the mask you drew. In other words, if you draw a square, it will make an entirely new image, except where you drew the square. I wrote it up and they said it wasn't a bug... *shrug*. But what if I use the Inpaint model to mask in loopback mode? hmmm...
Now I know why mine did not work at all. It just kept making a completely new image every single time. What a complete waste of my time...
@@timothymaggenti717 Yeah, I wrote it up as a bug, but they said it was supposed to be that way. *shrug*
Getting an error when trying to use controlnet inpainting with img2img.. will play around with settings, but getting this error: ValueError: Coordinate 'right' is less than 'left'
And you have all the input images in place?
Why didn't you use prompts? Or does it not work with a prompt?
Thanks for asking this, I would like to know as well
It does work with prompts, but if you don't use any, it will make use of what's outside your mask.
7:21 You know there are ControlNet models for hands only, right?
Did you ever stop singing Wonderwall?
Nowhere near as good or as fast as Photoshop. Hopefully it will improve in the future, but for now the best AI inpaint/outpaint is Photoshop Firefly
Why do you increase batch count rather than batch size?
VRAM usage.
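In code terms the trade-off looks like this (a sketch assuming the diffusers library; the model id and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "a lighthouse at sunset"

# Batch size 4: four latents denoised in one pass, so activation memory
# (VRAM) scales with the batch.
parallel = pipe(prompt, num_images_per_prompt=4).images

# Batch count 4: four sequential single-image passes, so peak VRAM stays
# at the one-image level; it just takes ~4x the wall-clock time.
sequential = [pipe(prompt).images[0] for _ in range(4)]
```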
I can no longer keep up. I haven't done anything with the image AI for 2 months. 😄
Interesting video. Thank you. BTW, I was going to make a joke about a broken pencil… but it’s pointless.
😂😂 good to see you again, I missed a joke or two!
Do prompts not work? It's kind of useless to me if they don't.
Prompts will work.
Adobe has the same problem with hands and feet, mutations galore. Adobe is using Stable Diffusion's software for their Generative Fill. Anyway, wouldn't using 8k, high resolution, best quality, etc. in the positive prompt help with inpainting?
No they aren't, it's powered by their Firefly.
@@hound_of_justice How is it a coincidence that Firefly AI has the same problems with hands and feet the same way Stable Diffusion does? And Stable Diffusion was there from the start. Midjourney uses Stable Diffusion, and practically all AI art websites use it. If Firefly were original code, then it wouldn't have the exact same problems as Stable Diffusion.
I don't understand, how is this ControlNet inpainting better than regular inpainting?
It reads the rest of your image. Regular inpainting doesn't understand what the non-painted areas are.
@@sebastiankamph Really? I thought that's what the 'whole picture' option does when inpainting? Or does it do something else?
@@MADVILLAIN669 That mainly affects your resolution. ControlNet will read the image better.
@@sebastiankamph I see, thanks for the response!
It seems that ControlNet is replacing Stable Diffusion.
I couldn't really see anything different. On the contrary, why rely on luck for that when it can be directed towards a better solution? Or were these examples just not good?
Regular inpainting already does that and gives you control. This added feature can work with anything and no prompt, basically giving you something with no effort.
Hahaha, this is hilarious. Sebastian, I was expecting some magical results, but you don't seem too happy about any of the examples you tried in this video. I think I'll stick to Photoshop for my one-click inpainting fixes. I do love watching your content though. Thank you
I'm so very picky with what I like! I still think this is pretty good tbh 😊 It's not mind blowing like other CN stuff, but pretty good and usable.
@@sebastiankamph How does it do combined with a simple prompt for guidance like you would for normal inpaint?
8:35 Here's a woman with 6 fingers. Should we inpaint the hand? Nah, that's too hard. Let's remove this tree....
I think doing the hand is another inpainting tutorial. Smart inpainting without a prompt is a little limited :)
Am I the only one whose ControlNet has been completely broken since 1.1? Only Canny works.
Coffee =)
Mmmm, coffee 😊
Still not as much of a breakthrough as Firefly though. Maybe it will be as good as Firefly if we give ControlNet's developers some time.
👋
Good start but needs improvement
Maybe you have to use a prompt?
You're doing it wrong. You need to play with the control weight (between 0.15 and 0.60) to achieve better results.
Interesting, thanks for the comment.
Haha I knew this was coming!
Edit: Hi Seb, any multiple characters video coming? I suck at that
Same, could be great @Seb
😅 Hmm, I guess. It's not really "news", but could be an ok video nonetheless!
It needs improvement. The results aren’t that good. Also, we need to be able to tell the AI what to create/change in the inpainted areas.
You can still use the old inpainting with prompts. This was just an addition to what was already available.
Depth gave better results for me
This is no longer working
Ah, some ControlNet so I can f-off, and no installation of it is specified. Nice.
first
🥇🥇🥇
Nice video, I love your experiments! But ControlNet inpainting is not very good at the moment, I think.
Uhm. Not impressed.
We can't like everything! Maybe next update 🌟
@@sebastiankamph Well, yes, of course. In a web dominated by likes for years, I think it's important to get back to saying even what we didn't like without offending.
I still didn't leave a dislike, though.
Yes, it will be better next time. :)
these results are trash bro 😂
Have you *tried* large-subject inpainting without ControlNet? It's impossible.
@@jonmichaelgalindo Have you tried Photoshop Generative Fill, which actually gives quality results? 😏
so it sucks.
Seems you didn't do your homework before making this video; bad results.
The commercials are just too much! They completely ruin the video.
You know you can just skip them, right? If it helps them pay the bills, I think it's totally OK. :]]
photoshop generative fill is GOD TIER… this is GARBAGE TIER … git gud stable confusion 💪😂
Adobe has the benefit of infinite money, because they charge you a fortune to use their stuff. This, on the other hand, is free.
Why do you people still waste time on all this gimmicky stuff when Adobe introduces a far superior and quicker to use alternative already within Photoshop?
Mainly because this is just one of hundreds of tools available in Stable diffusion. What Photoshop has (so far) is still very limited, with no control except a text box. Using this part of inpainting together with the other variants of regular inpaint gets you quite the powerful toolbox.
Maybe because that software is paid and this is free.
Try getting around Adobe's hyper-restrictive TOS. You can't even generate an image containing a knife, sometimes. Or try generating something "different", say, removing a mouth from a picture. It just doesn't work. Too packaged-in.
Also it's Stable Diffusion under the hood, just packaged nicely.
Because fuck Adobe.
Photoshop's version is WAYYY BETTER bro… this is a dollar store ripoff
It's still better to use the inpainting tab + ControlNet. I don't like the method used in this video, I'm getting worse results.
Disappointing results though, to be honest.