WOW! NEW ControlNet feature DESTROYS competition!
- Published: 12 May 2023
- With a major new update to ControlNet for Stable Diffusion, Reference Only has changed the game, again.
Prompt styles here: www.patreon.com/posts/sebs-hilis-79649068
Support me on Patreon to get access to unique perks! www.patreon.com/sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
LIVE Pose in Stable Diffusion • LIVE Pose in Stable Di...
Control Lights in Stable Diffusion • Control Light in AI Im...
Ultimate Stable diffusion guide • Stable diffusion tutor...
Inpainting Tutorial - Stable Diffusion • Inpainting Tutorial - ...
The Rise of AI Art: A Creative Revolution • The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!) • 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion • Stable diffusion anima...
Dreambooth tutorial for Stable diffusion • Dreambooth tutorial fo...
5 tricks you're not using in Stable diffusion • Top 5 Stable diffusion...
Avoid these 7 mistakes in Stable diffusion • Don't make these 7 mis...
How to ChatGPT. ChatGPT explained in 1 minute • How to ChatGPT? Chat G...
This is Adobe Firefly. AI For Professionals • This Is Adobe Firefly....
Adobe Firefly Tutorial • Adobe Firefly Tutorial...
ChatGPT Playlist • ChatGPT
Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
How do you get two ControlNet units in your GUI?
You have to add the styles to the prompt, btw. In the video you just selected them from the dropdown, but they're not added to the prompt until you click on "add style to prompt"
@@UnBknT You don't need to click the button; they're still applied.
Why pay for your monthly Patreon when I can watch your free YouTube videos with adblock on? We were beating the competition, I thought, no?
@@142vids You're free to do whatever you want. The people supporting me do it out of the kindness of their hearts, helping me keep making these videos.
Started playing with it a few hours ago. It is insane. It's nearly as good as training but without the training. It pulls faces, poses, lighting, art style, everything. I cannot believe this is only the first iteration, it is already so good. I thought Shuffle was dope but this is on a whole new level.
Exactly, "almost as good as training" is the scary part. I've been able to get better likeness out of this reference_only model than I've had with pretty much every early training attempt. There's been a bit of cherry picking but in some cases I've gotten 2 extremely good hits from a 4 batch render. It's crazy how good this is already!
@@Mocorn Strange, because I still cannot get it to create a decent copy of the original face. It always makes the new image look younger and very different from the original face.
To me it looks like this method only works for people "coming out of the model". For example, if you take the seed image from this video and try to generate other images from it without Sebastian's "Digital/Oil Painting" and "Easy Negative" styles, the results are very unimpressive. I'm not saying that this new ControlNet isn't super cool for some use cases, but I thought he could have been clearer about the limitations.
I couldn't make it work with v1.1.174. txt2img is completely broken; even the hair colour doesn't match. img2img kind of works better, at least matching hair and clothes, but the faces are out of a horror movie, twisted etc. I'm using exactly the same styles and settings.
How do I access the free trial?
This has been extremely helpful in redesigning the characters for a video game I made way back in high school. I've taken my art, run it through AI, and watched it give me different variations of my work. I'd then pick what I liked from each and draw up the final design. It's such a time saver.
Thank you so much! You've been pretty much the only source I've needed to learn everything I need about control net. Great videos with clear and concise information. Keep it up!
Thank you very much, glad the videos have been helpful to you 😊
this is actually the very definition of game changing
💯
You can create a character and make a whole TV show or anime out of it.
Dude... you're literally faster than me clicking the update button in SD... Have my sub!
Thank you kindly! 😊🌟
The Open Pose 3D extension is great for posing - you can run it in the GUI tab, set the skeleton in three-dimensional space, together with hands and feet and generate 3 images: canny, depth and openpose.
I only started using Stable Diffusion a bit over a week ago and your videos are such a big help.
Waiting for my new computer with beefy vram to arrive, watching your vids to prep, and I'm loving what I'm seeing! Thanks so much for these!
Wow. CN guys are on a roll. They are innovating faster than OpenAI and Google. Hopefully they can keep up the momentum.
Ha, every day there is a dozen new breakthroughs!
@@AG-ur1lj That's why the battle for those brilliant minds is based not on ambition but on deprivation. The big players will acquire what they can, and the rest will be deprived and obscured. As always.
@@AG-ur1lj Powerful how? Will it scale to millions of users? Will it be safe from lawsuits or flexible enough to attract business users? I doubt that. Microsoft or Google could wait and buy anything viable, and you, even with your brilliance, will have nothing to say. As always in history.
@@AG-ur1lj You didn't realize that this technology is already paywalled and regulated. You will not profit from it, above a certain level of course, because you will not have the resources to train those tools or the licenses to use copyrighted source data. As of now, that is not a problem for big corporations, because they just take the best solutions and use them with their data. You will probably be happy, but once more, you will not profit from it. Even if you manage to train the best state-of-the-art algorithm, it will be WORSE than theirs, because they have access to all that data and those resources.
@@AG-ur1lj Have you downloaded terabytes of images and text and all the copyrighted books and proprietary magazines from the internet? I doubt it. Yet Google and Microsoft work at that scale. Since you will NEVER have access to the data, you will just become a giver of ideas to big corporations with your improvements to "open" algorithms. Without data, those algorithms just don't work. I put "open" in quotes because when the open-source community produces some breakthrough algorithm, big corporations WILL patent some small improvement and you will be barred from using it. That is the reality, based on history. I'm amazed by your idealistic view of business.
Sebastian big thanks for providing your styles. I mostly use them right at the beginning before even prompting and they provide beautiful results.
Happy to help!
If you Hires fix after the init image is generated, you can usually cut through the noise. Go with R-ESRGAN 4x, with denoise at 0.3 or 0.2. Keep that part weak. Or, alternatively, you can lower your CFG and use Hires fix to add extra noise and burn if you are going for a noisy style.
Fantastic info dude, thanks again
You bet! 🌟
Thank you for this news update!
This is fantastic! Thanks so much for the heads up.
Thanks for the video, I love watching how you present it. Keep it up!
Thank you for the support! 😊
Amazing! Thank you for making these tutorials.
Love this!!! I need this. Character consistency is my biggest problem.
Many thanks for sharing the tutorials, it's a massive time saver ;D
I LOVE THIS FEATURE. Already got some awesome results in the first few minutes of fooling with it.
Upon seeing this I upgraded to a 12gb gpu this week so I could finally run ControlNet.
It is indeed a literal game changer for projects that need character consistency. No more Lora and prompting gymnastics while crossing your fingers that the next batch will render what you want.
Cuts the workflow to a fraction of what it was before and opens all kinds of new creative doors.
I’m loving this feature!
Happy to hear it's working out for you! ControlNet is life.
Is it possible to use this to get a different angle of a specific environment in the same style? No people or characters, just an environment.
Yeah, but it won't be 100%. It's like a better img2img
Sebastian, thank you so much for doing what you are doing. I found you today and I have been watching your tutorials all day. I immediately signed up for your Patreon! So glad to have found you. I have two questions regarding this amazing tutorial. 1. I saw you highlight part of the prompt, "woman smiling", and then press Ctrl+Up. Could you tell me what that does? Are there any resources about those prompt tricks? 2. Would it be possible to combine two photos that I like to create a new one? Thank you so much again, have a wonderful day!
Ctrl+Up with text selected gives that text more weight. The default weight is 1 and applies to everything inside the parentheses; the weight is relative to the rest of the prompt.
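As a rough sketch of what that shortcut does, assuming the A1111 convention where Ctrl+Up wraps the selection as `(text:weight)` and bumps the weight in 0.1 steps (the helper name here is hypothetical, not part of the web UI):

```python
import re

def bump_weight(prompt: str, selection: str, step: float = 0.1) -> str:
    """Mimic A1111's Ctrl+Up: wrap the selected text as (text:weight),
    raising an existing weight by `step` or starting from 1.0."""
    # If the selection is already weighted, e.g. "(woman smiling:1.1)",
    # increase the existing weight in place.
    pattern = re.compile(r"\(" + re.escape(selection) + r":([\d.]+)\)")
    m = pattern.search(prompt)
    if m:
        new_weight = round(float(m.group(1)) + step, 2)
        return pattern.sub(f"({selection}:{new_weight})", prompt)
    # Otherwise wrap the bare selection with a weight of 1 + step.
    return prompt.replace(selection, f"({selection}:{round(1.0 + step, 2)})")

print(bump_weight("portrait of a woman smiling, oil painting", "woman smiling"))
# portrait of a (woman smiling:1.1), oil painting
```

Pressing the shortcut again on the same selection would take it from 1.1 to 1.2, which is how the `(woman smiling:1.2)` in the comment below was reached.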
I did get the smile to work, but I had to include my whole prompt so my image didn't change drastically, and I added (woman smiling:1.2) at the beginning of my prompt. The posing part was changing my image too much, but I have to play some more with that. Since you made this video they've updated ControlNet to v1.1.164. Thanks, love your videos!
Glad you're enjoying the videos! I had to test a bunch of stuff before I got it working, and some versions barely even worked for me. Hoping new versions will make it easier to use for all.
This is exactly my experience too.
Also, "ControlNet is more important" brightens up the image for me. I can get more consistent lighting with "My prompt is more important", but that changes the image more.
I'm getting nowhere fast! Might just give up altogether! I mean, the output looks nothing, nothing like the input image, and I did everything exactly the same as in the video! ;(
I would love to see how they pulled this off. It seems like if they can do this, then a lot of other things we don't have yet ought to be possible, like maintaining outfits or architecture. This is perfect for making comics, though, with character coherence between frames. Maybe they could even fix the coherency issue of tiling a high-res image, depending on what they did, exactly. This is pretty crazy.
You can maintain an outfit with it: just prompt that outfit, or maybe use just the outfit here and the face in a separate ControlNet unit... you know what, I'm gonna check that today.
@@wykydytron Did you figure out how to do it? I try to use one CN unit for reference and one for OpenPose but can't seem to figure out how to get good results.
Thank you Sebastian. As ever, your tutorials are informative and straight to the point... and they work!
Happy to help, thank you for being here! 🌟
This seems like something they could really use to do multi-frame rendering for txt2video
AMAZING!! Fantastic video! Thank you for sharing it!!
Glad you liked it, you superstar, you! 😊🌟
Pog, didn't notice the Update. xd
ty, Seb. Had a good day.
You're welcome! And thank you, you too 🌟
Cant wait to try it, thx!
Please raise your volume; I almost had a heart attack when the ad kicked in lol
Well, this wasn't quite what I was looking for but holy hell I got something good.
I accidentally wiped my prompts and didn't know how to get them back... loading the image into PNG Info brought up my prompts/settings.
So, thank you for that!!!
How would you recommend getting the back of a character? I am trying to grab a depth map from both sides and combine them in Blender. I guess I could do head-on and then 120° turns in either direction...
Being able to do my characters in different 3D positions Dang this is godlike
This has more character consistency than many 'old-fashioned' comic books :-)
@@fernando749845 😅😅 this is actually sad to hear
@@MrErick1160 my results are compeltly different than the reference :D :D
🌟🌟
@@fernando749845 yes, but actually no... comic books stay very character-consistent unless a panel gets drawn by a different artist
Wonder if people have started building graphic novels with this. Consistency in character design and style between frames is going to be really useful for something like that.
Or video. 😮
You can already get consistent characters with textual inversion or a LoRA; you can train one yourself. Textual inversion especially needs only 8 images; any more is just useless for training a TI.
@@HunterIndia But then you'd need to train a model for each character... Suppose it's not that tall an order, but still, this'll make things much easier. I should start looking for some webcomics with an AI tag. Would love to see AI being utilized in that space.
That's the dream: video, indeed. Scary how much GPU power would be required.
@@Pahiro I'm trying but with Blender and img2img (more fine control).
"I only have my shelf to blame." What a super fine dad joke. I bow, and thanks for the quality info.
This is insanely useful. I've been trying for the last week to collect images for a LoRA. It can be tricky as hell, because keeping characters consistent is HARD: change just a few words and suddenly the whole piece looks like a different style. It will be SOOO easy to make a LoRA now thanks to this. What will they come up with next? Because Google and OpenAI, in my opinion, are doing a pretty "meh" job.
Yeah, this was my first thought too. By itself it's great, but it can be SO useful for training LoRAs, which, I suppose, are more accurate.
Hey, can you please tell me how this makes it easier to train a LoRA?
@@scottyfityoga Easier to source images of a certain person, for example.
This is what I was waiting for! My goodness
Unfortunately, it doesn't work for me. The generated images all look like the same person, but they don't resemble the person in my original image. It's like my image is completely ignored.
Totally the same. I'm getting a whole different face...
Do you use Mac M-series processors? Because I do, and there is a bug when it tries to read the uploaded face.
Wow, that is amazing, great video as always.
Glad you liked it! 😊
Just wow, GAME CHANGER is the right set of words for this... just tried it and I'm utterly impressed. Thanks for reporting on this!!
Really curious how this could also work with inpainting and img2img at the same time. exciting!
Whenever I try this, it works well EXCEPT I keep getting instances where the body of the person looks like it's covered in sand or other patterns. The face area gets cleaned up during the swap and face fix, but the body just gets completely wrecked. In the most recent example, it looks like they got wet, lay down in the sand, then stood up to take a picture.
Great video, thank you. I have a question: I can make a pose in img2img. When you use a batch of 4, you get 4 pictures and one pose picture. Can I save this pose? When I click on the pose image and use the save button, it doesn't work; I don't get a download button as with a normal picture.
Hi,
very good tutorial.
I tried my own image as input for the ControlNet with reference_only, and a simple prompt like "man is smiling", but the faces are totally different. How can I preserve the face?
Thanks,
Eran
Thank you for this excellent content!
Happy to help! 😊
When I use ControlNet, it only produces an inverted image as the result of the reference, even when I select reference as the control. How would I fix this?
This is actually a game changer 🎉🎉🎉
What I would like to do is inpainting with ControlNet. What I mean is: I have an image with a pose, I remove one arm for inpainting and pass in another arm pose, and the inpainting is done with that new arm pose. Is this possible? What I found is not like that.
Awesome tutorial! Thank you so much! However, for some reason SD ignores the second ControlNet unit and doesn't give me the pose I want. Any idea what the issue might be? Please keep making more videos!
Hey there. Good content. Learning a lot on this channel! Thank you Sebastian.
How do I bring such a face (as here) into a generated image of, say, an assassin? Do I just carry on with my prompt as I would have, and bring a face image to ControlNet?
I've tried it... I can't get it to render anything even close to the likeness of the input image 😥
This is quite amazing. Even better than using LORAs and the chance to combine LORAs, seeds and ControlNet with reference methods, NICE...
BTW... I was expecting my "Wonderwall" dad joke. I'm very disappointed, mister Kamph (read it in a beautiful British Sean Connery angry tone).
You posted it after I recorded this. But I did find it very good! 😂😘
Thank you, this is very helpful 😉
Thank you!
thank you for your work.
Thank you for your engagement! 🌟
Thanks, great video and straight to the point. Liked, subbed and commented !!!
The holy trinity! You're the real mvp 🌟
Very strange! I updated everything, turned everything on in exactly the same way, uploaded a picture, but the result is completely random. It does NOT work!
Same here. It works only with some demo pictures (perfect face, no subtle expressions, no background). And OpenPose misses the front/back pose 70% of the time.
Yes, same thing.
How did you get your styles menu subdivided like that? Is there an extension that does that or what?
Bro is the best, thank you so much for saving a ton of time
That was one of the missing features: the ability to keep the same character. Still not perfect, but we're getting there! I now wonder if it will become possible to generate a few good-looking images and train a Dreambooth on them. That way you could reuse the face only as an inpaint.
Wondering the same thing. Things like copying over styles, a person's clothes, the patterns on the clothes etc. to the generated images. Does Midjourney Remix do that?
Thanks for the update! It looks reassuring, as we may not have to learn how to train and fine-tune. I wonder if you can just keep using the same reference face in ANY different scenario; then we'd have a character mapped by seed only.
If you keep injuring yourself, it's time to book an appointment to learn some shelf improvement.
This looks amazing, I keep meaning to look into ControlNet more but never seem to get around to it. Cheers.
Do you have any tutorials on how to create professional self-portraits? I want to look pretty on LinkedIn lol
There's actually one thing I was thinking about... After version 1.1, ControlNet started implementing something new almost every week, first of all new preprocessors. So I'm pretty curious about when there's going to be an actual counterpart to Midjourney's Remix mode.
I've followed the same steps but my pictures come out nothing like my original. I am enabling it and selecting 'reference only', but the new pictures look nothing like me.
I wanted to ask if there is a way to have two LoRAs in the same image. Do you know how to do it? Could you make a tutorial about it? Thanks.
Thank you very much for another great video.
Wanted to ask about the styles; it's the first time I've seen this. Are there any videos where you explain what they are and how to use them, or can you let me know here quickly?
Thank you
Check the pinned comment or video description. Install instructions and usage are in that link.
I wish I could use this on my PC; I'm just too limited on the GPU front. I've been wanting to do a comic book, but getting consistent characters in Midjourney is like pulling teeth.
Has anybody figured out why there are no models coming with ControlNet v1.1.234? I tried to use it on this version and nothing worked; ControlNet was just ignored for everything (canny, pose etc). I could not select any models for any preprocessor, as the Models dropdown list was empty. I downloaded one model for OpenPose and put it in the models folder in extensions. Now I can select this model for pose and it has all started working. I installed ControlNet from Automatic1111, but it only puts the yml files for the models in the required folder, not the actual models themselves.
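A small sketch of the check described above. The directory layout and filenames are assumptions for a typical Automatic1111 install, where the extension ships config files but the large .pth weights must be downloaded separately into the extension's models folder:

```python
from pathlib import Path

def missing_models(models_dir: str, wanted: list[str]) -> list[str]:
    """Return the model names that have no .pth weights file in models_dir.

    Hypothetical helper: `wanted` is whatever preprocessor models you expect
    to see in the Models dropdown (e.g. openpose, canny).
    """
    d = Path(models_dir)
    return [name for name in wanted if not (d / f"{name}.pth").exists()]

# Assumed path for the sd-webui-controlnet extension inside the web UI folder.
wanted = ["control_v11p_sd15_openpose", "control_v11p_sd15_canny"]
print(missing_models("extensions/sd-webui-controlnet/models", wanted))
```

Anything the function reports as missing would explain an empty Models dropdown for that preprocessor.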
I'm actually doing even more crazy things with tile. But yeah, reference ones are great too.
Awesome video👍 Your computer is so fast in generating pictures, what are your hardware specs (cpu, gpu, RAM)?
Hello, and thanks for your videos, because I'm learning so much! I wanted to ask if there is a way to add real objects into an image, for example a model holding a real bag. Thank you
Hey Sebastian, loving your videos. I notice that I don't have any ControlNet Units in my UI. Any advice on why/how that is set up?
Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.
Great video, amazing tool!!
Thank you! ControlNet is so powerful, it blows my mind. And I'm not exaggerating.
It's still very clunky, but we can see the future here. I want to be able to adjust it like making an MMO character: dress them however I see fit, then put the character in any scene I want, in any pose I want, talking/singing/dancing/whatever. We are so close to that now; it is so exciting!
There is a ControlNet that allows for easy outfit swaps; my poor memory can't handle its name, but it has 3 versions and the first ends in 20, if that helps. Anyway, it detects what's in the picture and paints it in corresponding colors; then you just say you want the person to have X outfit and it will change the clothes, but the rest will remain unchanged.
@@wykydytron Segmentation.
There's one thing about this preprocessor... it's more resource-intensive. I'm generating a 512x768 image and setting Hires fix to 2x. As soon as it starts to render the hires image, a "NansException: A tensor with all NaNs was produced in Unet" error occurs. It starts to render an upscaled image only if I lower the upscaler to 1.6.
What is the difference between reference_only and Roop? Thanks
Thanks again. First try failed, but I will attempt again soon. After you get it to draw the character correctly, can you then load a reference pic of the costume only and use inpainting with it to give the character a chosen costume?
In my case reference_only isn't respecting the style of the model; the results are always too realistic. I would like to use it while respecting the style I want.
What a gamechanger...my goodness!
I don't have the ControlNet Unit 0 and ControlNet Unit 1 tabs. I only have "single image" and "batch" and nothing above that. Have I done something wrong? I've checked that everything is up to date.
Settings - controlnet - multicontrolnet: It is set to 3 by default. If you set it to 2, it should work.
You are on the top of your game Seb! Go king!
Thanks superstar! 🌟
Hi Sebastian, your videos are amazing!! Thanks very much. I have a question for you: do you think it's possible to make an AI model wear a real dress? For example, if I have a ghost-mannequin photo of a dress, can I generate a photo of an AI model wearing it? Please let me know; I'm new in this field and I think this could be very useful.
Omg this is incredible
I can't disagree 😅
I can't get this to work; it just sends a bunch of error messages my way and ends with 'TypeError: unhashable type: slice'
Does it work with multiple pictures of reference?
For the posing, I'm thinking we can also extract a pose from an image?
Best channel. Period.
Great videos. But I always have to crank up the sound to max to listen. 😊
I have 1.66, but it will only copy the pose; the person looks nothing like the original photo... any ideas why?
Same issue here! It doesn't exactly motivate me to continue! Have you found out why?
Thanks!
Wow, thank you once again! Real mvp material. 💫
Do I have to follow this procedure if I want to take one image from the img2img window and apply OpenPose to it to get different variations? Or is there a simpler way?
Hello there, I have a question regarding ControlNet. I have seen that using a 3D model you can make poses and use OpenPose to extract them, and in this video I learned that you can use any face as a reference and even combine it with OpenPose. Now my question: I have a whole finished 3D model of my character, e.g. a 3D anime character in Blender. I would pose it, and it has its own face and clothing. So I would pose my 3D model and take a picture of it. How can I use ControlNet so it uses the reference picture and generates an image with the same face and clothing? Is there any way?
How is this different from img2img? I played with it and don't see a difference.
With faces the results vary, from very impressive, almost 1:1 copies to completely messy images, but if you add a proper LoRA for that person it has about 90% accuracy. What I love about it most is that you can use it as a style definition: you don't need a LoRA, just put in an image in the style you're aiming for and you're done. It doesn't even matter what's in that image; it will do a great job of copying the style. It's also very easy to achieve dark, low-light images when you use a dark, low-light image as the source. Honestly, CN will dominate everything until someone makes a similar AI that does understand math, gender, individuals and how the human body can or cannot bend.
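For anyone scripting this workflow rather than clicking through the UI, here is a hedged sketch of building a txt2img request with one reference_only unit for the Automatic1111 API (available when the web UI is launched with --api). The field names follow the sd-webui-controlnet API as I understand it, so treat them as assumptions rather than a definitive schema:

```python
import base64

def reference_payload(prompt: str, image_path: str, fidelity: float = 0.5) -> dict:
    """Build a txt2img request body with one ControlNet unit running the
    reference_only preprocessor. reference_only needs no model checkpoint;
    the source image itself steers the style/identity."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "reference_only",  # preprocessor, no model field
                    "image": image_b64,          # the reference image
                    "threshold_a": fidelity,     # assumed: style fidelity slider
                }]
            }
        },
    }
```

You would POST the returned dict as JSON to something like `http://127.0.0.1:7860/sdapi/v1/txt2img`; a second entry in the `args` list would be the place to add an OpenPose unit for combined face-plus-pose control, as discussed in the video.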
What do you recommend, then, for creating new characters that only exist as a single image to start? I was thinking of using CN and cherry-picking the good results to create a LoRA from.
perfecto !
My laptop has only 4 GB of VRAM, so not a good start already 😅 but I was able to generate at 1024x1024 resolution. After updating Automatic1111 I can't generate above 512x512, and I also can't use ControlNet; every time, the VRAM usage goes through the roof. Then I upgraded to Torch 2.0, but it still didn't help.
Torch 2 definitely decreased my generation time though, not gonna lie.
What should I do? I want to use ControlNet.
Something's wrong: I have the latest version of ControlNet, but the images come out absolutely different from my control image.
2:05 How do you make that STYLE list?
EDIT: Never mind, I checked your Patreon link.