NEW ControlNet for Stable diffusion RELEASED! THIS IS MIND BLOWING!
- Published: 27 May 2024
- ControlNet can transfer any pose or composition. In this ControlNet tutorial for Stable diffusion I'll guide you through installing ControlNet and how to use it. ControlNet is a neural network structure to control Stable diffusion models by adding extra conditions.
Open cmd, type in: pip install opencv-python
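If you want to confirm the install worked before launching the webui, a quick stdlib-only check (a sketch for convenience, not part of the official setup) is:

```python
# Check whether the cv2 module (provided by the opencv-python package)
# is importable, without actually importing it.
import importlib.util

ok = importlib.util.find_spec("cv2") is not None
print("cv2 installed:", ok)
```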
Extension: github.com/Mikubill/sd-webui-...
Updated 1.1 models: huggingface.co/lllyasviel/Con...
1.0 Models from video (old): huggingface.co/lllyasviel/Con...
FREE Prompt styles here:
/ sebs-hilis-79649068
How to install Stable diffusion - ULTIMATE guide:
• Stable diffusion tutor...
Chat with me in our community discord: / discord
Support me on Patreon to get access to unique perks!
/ sebastiankamph
The Rise of AI Art: A Creative Revolution
• The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!)
• 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion
• Stable diffusion anima...
Dreambooth tutorial for Stable diffusion
• Dreambooth tutorial fo...
5 tricks you're not using
• Top 5 Stable diffusion...
Avoid these 7 mistakes
• Don't make these 7 mis...
How to ChatGPT. ChatGPT explained:
• How to ChatGPT? Chat G...
How to fix live render preview:
• Stable diffusion gui m...
Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
This is the reason it's so important that Stable Diffusion is open source.
I mean its cool yeah, but doesnt it steal art from Artist that way?
it's*
@@losttoothbrush
Open source just means people can access the source code and therefore add to the tool.
Being open source is not directly contributing to the "stealing" issue. Although indirectly it can make it more accessible.
In the end it's a tool and I'd argue what you make with it may be transformative work or not.
People "artists" cling to their prompts like their lives depend on it.
Asking them to share is like squeezing blood from a stone.
@@losttoothbrush Well, y'know, if we're gonna steal art, at least make it public and for everyone instead of one big corpo having the goods, hell yea brotha
man you are incredible! so good and simple, i installed stable diffusion with one of your videos, and now im ready to install control net. i am officially your fan!! thanks for everything!! greetings from corfu greece
This is probably the most useful thing for SD. Thanks for showing us!
Thank you, this is really helpful. My "pencil sketch of a ballerina" had three arms and no head, but eventually I generated something usable. It's all absolutely fascinating and it's been fun to learn over the past week or so.
Glad it was helpful! And we've all struggled with the correct amount of body parts 😅
As a drawing teacher with 33 years of experience teaching school kids how to draw and paint, one thing is for sure... AI cannot replace human creativity, but I must say this will surely help so many people with poor drawing skills unleash their creative thoughts and imagination! Which, for a teacher like me, gives immense hope of a revolution in the arts!
Thanks for such an easy and helpful tutorial on this topic!
@@ClanBez Same, I see the possibility of working on multiple projects as a designer. Tedious parts of the process are getting automated. Super excited to keep exploring!! Will get more time for vacation, well, I hope!🤞 PS: In my area, a high school art teacher is referred to as a drawing teacher and college art teachers are referred to as art teachers. Yeah, it's a little weird.
honestly refreshing to see some people be so open minded to this. AI art is often viewed as a job killer but i mean honestly speaking look at so many incidents from the past. When digital art first started i'm sure millions of artists who worked hard with paint, and pencils and ink and every other form of real life art, felt threatened by it.
Why pay a guy to paint a logo for you, when you can use a paint tool? Among so many other stuff.
But look what happened now: digital art is so common because it's quicker, cheaper and more flexible. If you made a mistake in a real-life painting, you didn't have an undo button or an eraser.
Just like digital art gave so many new individuals a chance to make art, so too does AI. It's all in how you use it.
People feel threatened because a lot of artists still live off commissions (btw, they aren't wrong for doing that, it's "easy money"). When you're a teacher in an art school it's easy for you to not feel threatened by AI art.
Don't get me wrong, I'm not here to sound mad or anything, I'm just saying the truth. I agree AI art will revolutionize the way we think about creativity, and I also think it won't destroy art (at least not completely); people will still have their community of non-AI art. But it's undeniable, AI art has tons of legal issues and the AI is pretty bad right now. Only very rarely was I unable to spot whether an artwork was AI or not.
yeah, but it can sure enhance what skill you have yet to acquire or lack the talent for
@@viquietentakelliebe2561 How can u enhance a skill ur not practising? drawing a squiggle then letting ai complete the work based off actual artist's work isn't YOUR imagination or skill and u still learn nothing. ur not doing any of the work the ai is
Really cool. Things are evolving pretty fast! Thanks
Right? This is moving extremely fast. I'm hyped for what's more to come! 🌟
Another good easy to follow tutorial, thanks Seb 👍
This is gold, and Im talking about your video, dude. Really well explained, very detailed, thanks a lot!
Why thank you for the kind words, that's really thoughtful of you 😊🌟
This looks amazing. My drive is full but I definitely want to play more with this.
Throw away the other models and get this, it's fantastic! If you only have space for one, get the canny model.
@@sebastiankamph I'm going to get a new hd after work today. 2tb or so. My stable diffusion folder is 500gb.
I'm also a little nervous since I have an AMD card I'm not sure if this will work on the CPU, but I'm working on building a new computer soon.
Brah, your camera is so nice..... Love to see the commitment to your craft. Keep it up fam
I messed with this already... seems like the first step to something amazing!
Set this up yesterday its pretty amazing
The pose algorithm is EXACTLY what I've been looking for. Thanks for this video!
Hopefully I'll manage to install it. Last time I tried to use extensions, Stable Diffusion just refused it and I had to reinstall everything, lol.
EDIT: Ok, I installed it, and it works! Sadly, the Open Pose model seems... capricious. It often doesn't give me any skull. The Depth Map works wonderfully though.
That is really awesome :D Gonna try the scribble! I've been having horrible varied results of deformed humans and I was getting sick of it. Haven't touched SD since. Now this changes! :D
This is the second best thing right after Ikea Köttbullar
🐴 🍖
great video on controlnet man, thanks a lot !!
Glad you liked it!
Installing controlNet !!!! eeeeeek great tutorial so much fun!
Have fun!
Super helpful content man, thank you for making it.
My pleasure! Glad you enjoyed it 😊
Thanks for this well put together tutorial on how to get it going!
This is kinda what i was hoping for, turning my b&w line art into ai generated images =D, lotsa scribbles here i come!
Controlnet is insane. Thanks for the examples
You bet!
I had Pingu vibes at the end, this is quite an amazing update.
If you want to use the source image as ControlNet image, you don't have to load the ControlNet image separately (it will automatically pick the source image when no image is selected). Saves some time. 🙂
I wonder why img2img is used at all since ControlNet is meant to do the job now instead of the old img2img algorithm, right?
@@Naundob ControlNet can create from something whereas img2img can create from nothing.
@@superresistant8041 Interesting, isn't img2img meant to create a new image from an image instead from nothing?
Please please please finish these arguments... I don't understand what you both talking about hahaahahah. And give conclusion please. Thanksss
@@Naundob img2img gives you way less control, basically you pick "denoising strength" which at 0.5 basically tells AI "this is a 50% done txt2img image, half way between random noise and desired result, continue working on it until the end" so you have to look for golden middle between your image not changing at all and changing way too much. Controlnet can be used both in txt2img and img2img and it has many powerful features like drawing very accurate poses, keeping lineart intact and turning simple scribbles into actual art (where with normal img2img you'd end up with either an ugly result or one that doesn't resemble the doodle almost at all)
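The denoising-strength trade-off described above can be sketched in a few lines (conceptual only — `img2img_start_step` is a made-up name, not actual A1111 code):

```python
# Conceptual sketch: in img2img, "denoising strength" decides how much noise
# is added to the source image before denoising begins.
#   strength 0.0 -> image is left untouched
#   strength 1.0 -> start from pure noise (effectively txt2img)
def img2img_start_step(total_steps, denoising_strength):
    # Only the last `steps * strength` sampling steps are actually run.
    return total_steps - int(total_steps * denoising_strength)

print(img2img_start_step(20, 0.5))  # begins at step 10 of 20
print(img2img_start_step(20, 1.0))  # begins at step 0 (full txt2img)
```

This is why picking the strength is a balancing act: too low and nothing changes, too high and the result stops resembling your input — which is exactly the gap ControlNet's conditioning closes.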
You have taught me so much, thank you very much!
Glad to hear that!
This is absolutely amazing! Thank you so much!! s2
Thank you for the kind words 😊
Since I've been playing with ControlNet I am in a constant state of awe and disbelief😮 Truly game changing. What I really like is the possibility of rendering higher resolution images with that much control. Does anyone have a tip on applying a certain color scheme when using ControlNet? Probably something we have to wait for until the next SD revolution hits. So roughly 5 days.. (me making sounds of pure excitement and slight fatigue at the same time).
Hah, I totally feel you. I'm hyped for every new update, and then I look at the list of all the videos I want to do.
Try using the base picture in img2img for the colors and tone you want, with a denoising strength of around 0.7 or higher
(it can be of a completely unrelated subject and a different aspect ratio).
Then set the text prompt to the subject you want. Additionally, you can set the base ControlNet image to the pose and subject you're looking for.
This is creating a relatively new image, however, not color grading an existing one. Still, it's an interesting way to control the general vibe and keep consistent colors between renders.
@@sebastiankamph SEBASTIAN, GREAT CHANNEL AND CONTENT. I have a doubt: does this extension work with Stable Diffusion 1.5 models?
@@sergiogonzalez2611 Works with all models; the majority of my testing has been on 1.5.
I'm having trouble getting it to work. I'm lost. I tried, for example, scribbling a poorly drawn dog and prompting "A photorealistic dog" (with openpose, canny, depth), and the only time I got a photorealistic dog was when it output a black image; otherwise it just spits out a 3D image of my scribble. Hope that made sense.
Thanks for explaining this.
I feel silly, but I hadn't tried this yet because I don't have 50 gigabytes of free drive space. It didn't occur to me that I could just install part of them. This is truly amazing stuff, I'm looking forward to seeing how animations look with this tool.
...WOW! ...the next growth spurt of SD...people say AI makes us stupid but i haven't learned so much since AI crashed into my life...Big FANX for keeping us up to date!
So much new information entering our heads 😅 Thanks for the support! 🌟
AI does and will make people stupid, in the sense they don’t need to learn anything themselves they just ask an AI to do it for them. You are learning because you are interested in it and it is new, once it becomes more prevalent it will most likely stop being open source and people will just be interested in the results, not how it works.
I agree with many things, and I think that children should not have access to generative AIs until a certain age (16?). However, I have no idea how you would remove open source software from millions of private PCs (?).
My biggest concern is that the AIs will greatly increase the general smartphone addiction.
(I don't have one myself and don't want one either).
But: I love "painting" and filming in VR... and thanks to the new AIs, I now have the potential of an entire animation studio at my own disposal.... BTW:
The absolute nightmare is AIs that develop weapons, toxins, etc., as well as the AI-based mind-reading technology that is already pushing onto the market...
Thanks for sharing your experience! I'd kind of given up on SD because my computer is way too slow (5-10min to generate a 512x512 Euler a image) but when I came back to the community last week, everyone was creaming their panties over Controlnet and I had no idea why. Thanks to your explanation, now I kind of understand but I guess I'll have to try it myself some day once I can afford a better computer.
I feel you! But yeah, ControlNet is WILD!
got it working, great video.
The audio is SUPER👌👍
amazing video, thanks!
I had difficulty cutting through the jargon. thanks man.
Glad I could help 😊
Thanks Seb ! you are my Obiwan Kenobi of ai !
Thank you as always my friend! Your supportive attitude is a national treasure 🌟
Very helpful.. Thank you!
I'm convinced the future of AI-generated pictures will be a mix with 3D models. Like, you do a precise pose in 3D and apply Stable Diffusion on it, so that it has precise information about depth in the scene, and that will achieve truly photorealistic renders.
You can do that already with ControlNet
controlnet is king from what I can tell.. so far
Great video thank you brother!
It'll be so much better when somebody actually puts a proper UI on all of this.
ooohhh, someone that explains things the way they should be done. ty
Thank you, that's very kind 😊🌟
I'm trying to find a way to have SD include character accessories accurately and consistently. Like having a character holding a Gameboy, or some other specific device. Would love to see a video breaking down how to train SD on specific objects, and then how to include those objects in a scene.
another awesome video. Thanks!
Glad you enjoyed it! 🌟
thanks a lot. Only works with 1.5 though. But I found out, so all good :)
How did you do it?
This is really a Game Changing feature!!!
Sure is! 🤩💥💫
Thank you for this mate
Happy to help! 🌟
For the Openpose, is there a way to get the coordinates of the joints in the pose?
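ControlNet's openpose preprocessor follows the OpenPose keypoint convention; if you export or save the pose as OpenPose-format JSON, the joint coordinates can be read out like this (the sample data below is made up for illustration):

```python
import json

# Made-up example of OpenPose-format JSON: keypoints are stored as a flat
# [x, y, confidence, x, y, confidence, ...] list per detected person.
pose_json = '''{"people": [{"pose_keypoints_2d":
    [256.0, 128.0, 0.9, 260.0, 180.0, 0.8, 220.0, 182.0, 0.7]}]}'''

def joint_coordinates(data):
    # Group the flat keypoint list into (x, y) pairs, dropping confidence.
    joints = []
    for person in data["people"]:
        kp = person["pose_keypoints_2d"]
        joints.append([(kp[i], kp[i + 1]) for i in range(0, len(kp), 3)])
    return joints

coords = joint_coordinates(json.loads(pose_json))
print(coords[0])  # [(256.0, 128.0), (260.0, 180.0), (220.0, 182.0)]
```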
For storyboarding this is insane.
another great video!
Glad you enjoyed it!
This is an AMAZINGLY useful tool. Another big step for A.I art.
Couldn't agree more! Real game changer 🌟🌟🌟
Does the preprocessor always have to match the ControlNet model? I was using it with mostly no preprocessor selected and it seems to still work? I thought it was only an optional thing that allows you to create an additional pass.
Does Stable Diffusion rely on metadata created when it generates the sketch or the original image to generate the reposed image? I'm wondering because I think it would be interesting to upload hand-drawn sketches for the pose sketch and have Stable Diffusion redraw an image based on that.
How challenging would it be to add your own training data (not sure if correct term) that this stack would use?
Let's say that I would get too much of certain style, but in case I would like to do something totally different.
amazing thanks
Fantastic! thanks for the tutorial! let's play!
Have fun! Good to see you again Gerard 💫
This is nuts! 🤯
I couldn't agree more 🌟
If you lower the weight to zero it will cost you an arm and a leg. Brilliant! Thanks for Your Video! Definitely Highly Valuable Content.
Sebastian, I get this error when I tried typing pip install opencv-python: 'pip' is not recognized as an internal or external command, operable program or batch file. Any idea what is wrong?
Pretty awesome! 😍 Now I’d like to know if there’s a way to apply these poses to our own custom characters, instead of just random characters. 🤔
Is it possible to pose two of our original characters together?
Also, it’s nice that we can copy the pose, but can we also copy facial expressions into our characters?
Yes and yes! 🌟 It might be a little tricky to get exactly what you're looking for though, but it is possible. I would inpaint each character separately to get the original features.
Hi! Very useful video, I got intrigued, but how do I do all of it in Google Colab, especially the first steps in the Command Prompt (cmd)? Is it possible?
This is truly mind-blowing. Thank you for sharing. What version of Stable Diffusion are you using. 1.5 or 2?
Both! Your Stable diffusion program is not version dependent. It's the actual model .ckpt or .safetensors file that has a version. 1.5 is great for illustrations, while 2.1 does a great job with photorealistic portraits.
Any clue why the controlnet models take a while to load for me? I've had the same issue with safetensors models.
Thank you for the tutorial - I am not getting the two images when I generate from ControlNet - just the one.
After being so disappointed with Pose, I had much better results with Depth. Thanks!
Great to hear!
Thanks, another well-done video. One annoyance: are those two dropdowns really needed? It seems like preprocessor type and model go hand in hand? Or is it some UX decision made by the extension author?
Thanks! Honestly, I couldn't say. It's still too early, let's see as people explore the tool more how it ends up.
How did you get the drawing canvas ?
Super useful tutorial. I have one question, my stable diffusion does not show me Scribble mode next to enable, i have invert input color, rgb to bgr, low vram and guess mode, why is that?
is there a way to clone a object or a person with the background with Inpaint? what would be the prompt ? Ty
Hello Sebastian Kamph,
I really like your channel and the way you talk and make these very comprehensive videos. I learn a lot from you and I thank you very much for that. Please never change the style of your videos (calm, stable, precise).
Of course I have a question. I am concerned about the pickle files from lllyasviel. Does pickle mean that it can harm your PC? If yes, what safetensors files can be the alternative?
thank you very much and have a nice day.
Best Regards
Hey! Thank you! Safetensors are pickle-free and safe, yes. And the official files from lllyasviel are safe too.
What stable diffusion checkpoint do you recommend? Does it change anything picking a different one apart from the first image generation?
Amazing video! Got everything up and running
I've been playing a lot with Dreamshaper and variants of Protogen lately, but there are a lot of good ones out there.
Hi, thank you for this, I'm very interested but I can't download your prompt styles, any help ?
When I open the pre-processor tab there is a long list of processors to choose from, also processors I have not installed (manually). For instance, there are 3 scribble processors: scribble_hed, _pidinet and _xdog - which one to choose? It is also hard to invert the sketch from black to white
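On the inversion point: the "Invert Input Color" checkbox just flips pixel values, since the scribble model expects white lines on a black background. A minimal illustration of the operation (with Pillow installed you could do the same on a real image via `ImageOps.invert`):

```python
# A black-on-white sketch becomes white-on-black by mapping each
# grayscale pixel value v to 255 - v.
def invert_pixels(pixels):
    return [255 - v for v in pixels]

print(invert_pixels([0, 128, 255]))  # [255, 127, 0]
```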
how does it handle larger images? I played a bit with version 1.6 and I got a lot of out-of-VRAM exceptions for things like 1000x800 pixels, and I have 12GB of video RAM.
Thank you. If we are running it on Colab Notebook with WebUI enabled, can we paste the models in Google Drive's Models folder instead of the WebUI folder and then just paste the path into the Notebook?
Not OP but yes, you can copy/paste the models into your folder on your Google Drive but make sure you paste them to the Models folder in the Extensions parent folder and Stable Diffusion’s base models folder.
@@SilasGrieves Thank you
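For anyone scripting that copy step in a Colab cell, a small sketch (the commented paths are hypothetical — adjust them to your own Drive layout and webui install):

```python
import shutil
from pathlib import Path

def copy_controlnet_models(src_dir, dst_dir, pattern="control_*"):
    """Copy ControlNet checkpoint files from a Drive folder into the
    extension's models folder, creating the destination if needed."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for ckpt in sorted(src.glob(pattern)):
        shutil.copy(ckpt, dst / ckpt.name)
        copied.append(ckpt.name)
    return copied

# Hypothetical Colab paths -- adjust to your own setup:
# copy_controlnet_models(
#     "/content/drive/MyDrive/ControlNet",
#     "/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models",
# )
```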
Thank U
this is cool
Thanks for the explanation! Just asking , the checkpoint that you got there, is it self made? Or can I get it from somewhere? If I use the v2-1_768-ema-pruned.ckpt, I get this error "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)". Any idea?
I get the same... any ideas ?
Check Civitai for models. I recommend finetuned 1.5 models.
@@sebastiankamph yup I figured this was because I used 2.1 models, 1.5 works !
Thanks 👍
You're welcome 🌟
Is it possible to get multiple poses in one image, like two or more figures interacting?
Or would one do the figures individually and try to inpaint the others into the same scene?
Yeah, I think Controlnet is a great way to have multiple people in the image. Take a photo or sketch them. SD is not great at multiple faces though, but can inpaint that if needed.
It can do multiple people. I saw someone show an example where there were four people in the image.
Could you help me add ControlNet to the Deforum extension? Thank you
Awesome !
Thanks Adriaan! Good to hear from you again 😊🌟
My question is: Can you give SD a character in the img to img tab and use ControlNet to pose them, thus having a near identical character from the img to img one, just in a different pose?
I would like to know the answer to this too
Hej, I am interested in car body design and I need to produce orthogonal views of a vehicle (front, side, rear and top). Do you know if there is any Stable Diffusion extension that allows me to generate these views/images based on a car render I already have? My idea is to use these four views as a blueprint to make the 3D CAD model in Solidworks. Thank you!
ty!!
I haven’t been able to get the model to deviate like in your thumbnail. How did you manage to lose the skirt in one photo but get a flowing dress in another? Photoshopping the image first?
These are not shopped at all, just prompt and settings changed inside SD. You can finetune with both denoising strength and ControlNet weight 🌟
@@sebastiankamph I thought you might need to tweak the input images. I'm watching your other workflow videos now and it's been very helpful to see how you can tweak things. Thank you for all these videos!
is there any documentation on these models so I have an idea what I'm downloading? -- sorry if that's a dumb q, I'm SUPER new to all of this :)
awesome
How can I use an alpha of an image to use it for create a new different image? Thx
What GPU do you have? I noticed you generate stuff way faster than I'm able to.
Thanks for the tutorial btw
RTX 3080. You're welcome!
what kind of specs are you using for your computer? and how long does it take to generate a controlnet image?
RTX 3080. Depends on settings. 5-20s
Are you running on an old version of A1111? I don't have buttons for the sampling methods. That changed to a drop down long ago. Didn't it? 🤔🤔
Yes! I've kept various stable releases and stopped auto-updating since I had it break far too often.
controlnet is amazing.. still trying to figure out the HED model
Thanks.
You're always welcome 🌟
Links to images from video preview?? I really like lighting from the first two.
what do i do if my canvas won't show any marks even after inverting the preprocessor?
This look so fun 😢😢
Can you use it for batch img2img animations? Or just single image generations
It's possible to use it in batch!
@@sebastiankamph could you get it to work in batch? Mine only makes the first image and throws saving errors when generating past the first image.
What is interesting is that I need the opposite, I need that coloring page lineart from the beginning :D LOL
I'll tell you a secret, that was the hardest part when I played around with this! 😅 But you can still use photos as references.
thanks for the tutorial ! However I couldn't find the tab "Open drawing canvas"
Does this have to be on windows? Can it be installed on Mac?