Bravo! Thanks Vladimir
*Thank you for your support!*
This is the best tutorial I've seen on how to use it. Really great.
Holy crap this is great. I'm 6 days down the rabbit hole of A1111/Stable Diff and I can't get enough. I've been looking for this exact video! Thank you!
This could save the trouble of training models for different faces. Very helpful! Thanks
Absolutely!
From an art perspective, Vlad is the best A.I. mentor on RUclips, by far.
Thank you for your support!
He is to A.I. art what Da Vinci was to his age. Imagine if Vlad lived in Da Vinci's time. 🤔
@@Geekatplay I just looked up a RUclips video about stable diffusion, and it brought me back here, brother. The algorithm knows where to take me for education. It's so good.
This is an amazing workflow Vladimir, great job! So many people fighting to get exactly this for so long. Again, great job!
thank you
THANK YOU!! That face trick is something I've been trying to figure out for months; now I can make better portraits!!
Great to hear!
Thank you my friend
09:10 Would it be possible to use openpose_full, which also captures the face, instead of inpaint?
amazing thx, you explained it very well
Thx so much! That's a super nice tutorial.
You're welcome!
Vladimir, thank you very much. Excellent tutorial. I will try something similar.
Thank you!
Awesome!! You're amazing. I'd been studying this for ages, and after watching your video I finally learned it. Thank you. 👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻
thank you!
Thank you 😮 master, you're the GOAT ❤
Thank you very much for the tutorial. Went to find those models you used.
thank you
brilliant video !! thanks
thank you for your support!
Amazing, amazing content, thank you
Glad you enjoy it!
Genius. The video and workflow technique are very much appreciated!
Glad it was helpful!
Should it work with every checkpoint?
No, checkpoints need to match the other components in how they were trained
@@Geekatplay got it, because I tried it on what I already had and it wasn't working. Thanks
Exactly what I was looking for. Thank you.
thank you
The Preview annotator result button doesn't show. Any tip for showing this option? (ControlNet 1.1.02)
Thanks for mentioning this. Same issue for me, ControlNet v1.1.112
I have the same problem, did you solve it?
looks like it's set up differently now, you have to check the box that says allow preview and then click run preprocessor (the little explosion icon next to the preprocessor field)
it was changed; now it is a small icon on the right side of the dropdown box. It looks like a spark.
I followed it entirely, but my face is getting pasted onto the generation (I just want it to keep the structure of my face); it's not blending the face with the image. How do I do that, and which settings do I adjust? Please help
I'm lost for words. Subscribed. This is too accurate and detailed to be free
thank you
Hello, great video. However, I'm not sure how you got the ControlNet section and the models. Can you add an explanation for that? There are many results when searching for it, and the link you provided has no explanation. Thank you.
❤❤❤ great
thank you
Thank you sir for sharing your knowledge with the world! I fully watch all the ads for you 😂😅
how do I get Composable LoRA at the bottom?
Such a great trick.❤ Watching these vids makes me realize that I'm still a noob when it comes to SD. 😉
quality walkthrough -- can you explain in more detail what the LoRA configuration means and what it is doing? Thanks in advance
thank you!
Even though the video is a step-by-step guide to portraits, you managed to explain how many of the parameters work along the way. Thanks for the video.
thanks for the tutorial. I can't find control_sd15_canny. Where can I download it? Thanks.
Genius. The video and workflow technique are very much appreciated! thank you
thank you for your support!
Wow, this is amazing! I updated the Civitai page to announce that I started training RPG V5.0. I will ship that version with a set of ControlNet images to help people have more control over the model.
thank you
What video card are you running to get results that fast with all these ControlNets and scripts running?
Vladimir, thank you brother, you are great, and all the settings are complete. Best person ♥️ I hope you also look into the topic of video frames; I want to get a more realistic animation setup with the same settings as this video
I will check it out
@@Geekatplay thanks, I really appreciate this 😘
Love the video, thanks, but when I use inpaint to paint the face and click generate with the same settings, it just puts the face in a random place on the image and does not replace the face.
be sure you set the masking correctly; it may be inverted
Did you follow the prior steps to match the pose first?
It's amazing. Man, do you think it's possible to apply this technique to food photography or products?
i will try that.
Very cool, and helpful! Have you figured out a way to make the in-painted face match the style of the rest of the picture?
Yes...using Affinity Photo you can do just that!
You could do another img2img pass at low denoising with the ControlNet (rough sketch below).
@@cryptojedii you mind linking a tutorial? Thanks for the recommendation of Affinity Photo, never heard of it
@@tstone9151 I use the whole Affinity suite for a bunch of stuff. It's not really AI-driven, just a Photoshop/Lightroom alternative (in the case of Photo)
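For anyone who wants to script that low-denoising pass instead of doing it in the webui, here is a minimal sketch using the diffusers library. The model name, file names, and the 0.3 strength are assumptions (and the ControlNet part is omitted for brevity), not something shown in the video:

```python
# Minimal low-denoising img2img pass with diffusers (a sketch, not the
# video's exact workflow). Low strength keeps the composition and face,
# while re-harmonizing color and style across the whole image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("inpainted_result.png").convert("RGB")  # hypothetical file

result = pipe(
    prompt="portrait photo, natural skin tones",
    image=init_image,
    strength=0.3,        # low denoising: only small changes are allowed
    guidance_scale=7.0,
).images[0]
result.save("harmonized.png")
```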
Excellent!! Thanks!
Great work! Thanks so much, very comprehensive!
Amazing! I think the inpaint will solve my lipstick issues for singing videos! And I could learn more about the ControlNets! Thanks a lot
great lesson, learned a lot, thanks
thank you
Thanks for this tutorial. But I can't find the Model under the Preprocessor. I think I ticked all the right stuff in ControlNet and restarted the UI. Any suggestions?
you need to be sure the models are located in the correct folder (see the sketch below); I will make a video about it
@@Geekatplay I don't have this model either; can you please post a link for it and write where to put the model, in what directory/folder?
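Until that video is out, a hedged pointer: the ControlNet extension usually scans its own models folder inside the extensions directory. A small sketch (default install paths assumed) that copies downloaded .pth files there:

```python
# Copy downloaded ControlNet models into the folder the extension scans.
# Paths assume a default Automatic1111 install; adjust WEBUI_DIR as needed.
import shutil
from pathlib import Path

WEBUI_DIR = Path("stable-diffusion-webui")  # assumption: default folder name
MODELS_DIR = WEBUI_DIR / "extensions" / "sd-webui-controlnet" / "models"
DOWNLOADS = Path.home() / "Downloads"

MODELS_DIR.mkdir(parents=True, exist_ok=True)
for model in DOWNLOADS.glob("control_*.pth"):  # e.g. control_sd15_canny.pth
    shutil.copy2(model, MODELS_DIR / model.name)
    print(f"copied {model.name}")
# Restart the UI (or hit the refresh icon next to the Model dropdown)
# so the new models show up.
```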
how do you get those prompts? Is there any tool or site for good prompts?
yes, I will release a video soon about creating prompts (prompt generators)
Another great video! thanks!
Thanks again!
The portrait generation in this video looks awesome. May I know what program this is?
Stable Diffusion, local installation. ruclips.net/video/oTrmgXuc3e8/видео.html
Very nice tutorial about the AI workflow.
Glad you liked it!
hello Vladimir. Beautiful tutorial; only I don't have the "Preview annotator result" button in the ControlNet section. Do you know how I can get it?
in the new version it is an icon that looks like a spark, next to the preprocessor dropdown
Smoooth 👍
Thanks 💯
please upload the same tutorial with the new version; a lot is different and it's confusing me, and it isn't showing the preview option
congratulations on the job! Can I use this technique to create pets?
thank you. I will make a video specifically about pets, and yes, it does work. I've made a lot of photos/videos with my Border Collie
🎉🎉🎉
Can this method be used for architectural rendering?
yes, if you use a ControlNet model with architectural preprocessing. I can't recall it off the top of my head, but I will check and post.
@@Geekatplay If you make a post about architecture, that would be great
thank you for the suggestion, I will
Thank You
You're welcome
I do not have ControlNet in my settings?
you need to install it as an extension first
Very nice video explainer.
Glad you liked it
How do you find out what size the model was trained on to get the best results? I'm finding that adjusting the size proportions of the canvas really drastically affects my image output.
it is in the model description if you download from Hugging Face or Civitai
@@Geekatplay Thanks for the reply. I found out all the specific sizes for the model I was using. Turns out I was using an outdated version of SDXL.
why don't I have an upload image option in ControlNet img2img?
great video, it really helped me understand how to keep the face structure. Is it possible to do batch inpainting in order to create videos that retain the face structure? I'm working through your other video on creating flicker-free video and wanted to use this feature to keep the face structure consistent with my model's face.
thank you, it is possible, but you will need to load masks for the inpainting (see the sketch below)
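As a rough illustration of one way to do that (an assumption, not the video's method), you could auto-generate a face mask per frame with OpenCV's Haar cascade detector; the frames/ and masks/ folder names here are made up:

```python
# Generate one face mask per PNG frame for batch inpainting.
# White = area to inpaint, black = keep. Folder names are hypothetical.
import cv2
import numpy as np
from pathlib import Path

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

Path("masks").mkdir(exist_ok=True)
for frame_path in sorted(Path("frames").glob("*.png")):
    frame = cv2.imread(str(frame_path))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        pad = int(0.2 * h)  # pad the box so hair and chin fall inside the mask
        cv2.rectangle(mask, (x - pad, y - pad), (x + w + pad, y + h + pad), 255, -1)
    cv2.imwrite(str(Path("masks") / frame_path.name), mask)
```

As far as I know, newer webui builds have an inpaint batch mask directory field in the img2img Batch tab that a masks folder like this can be pointed at.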
I really enjoyed this video. All of your videos are great. Thanks.
thank you for your support!
WOW! You have me very excited. I need to see where to get started with this. Looks exactly like what I want to start doing! Liked and subscribed!
You can do it!
hello Vladimir. I appreciate your videos 👍
thank you!!!
Hello, I am missing Preview annotator result (and also the create blank and hide annotator buttons) in ControlNet. Is there something I can do?
click on the small icon next to the preprocessor selection
Hi, it looks like the iMac version is different from the Windows one! How do I install it on Windows?
How are you using Stable Diffusion like that?
it is the Automatic1111 installation (UI) plus ControlNet
great video. Is it possible to replicate the same face from the input?
Yes, absolutely
@Geekatplay I have been struggling with it for several weeks now. Do we need to mask and generate again for the face and features?
you have the option to invert the mask for inpainting. You can send me an email with the problem; I need more info on what you are trying to do
I just can't get anywhere. I have image A, and when I generate something in img2img I get, for example, a cow!
Are you using the Automatic1111 GUI? Yours looks very similar to mine, but I don't have ControlNet.
you need to install it from the Extensions tab (or clone it; see the sketch below)
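If the Extensions tab gives you trouble, cloning the extension repo directly also works; this sketch assumes a default install location:

```python
# Clone the ControlNet extension straight into the webui's extensions
# folder, then restart the UI. Assumes git is installed and the webui
# lives in ./stable-diffusion-webui.
import subprocess
from pathlib import Path

webui = Path("stable-diffusion-webui")
subprocess.run(
    [
        "git", "clone",
        "https://github.com/Mikubill/sd-webui-controlnet",
        str(webui / "extensions" / "sd-webui-controlnet"),
    ],
    check=True,
)
```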
Aren't those ControlNet modules unsafe due to pickle imports being detected?
they use some calls that can be misused; that is why I usually check the Python code itself when the code is not covered by the safeguard settings
Thanks. I was looking for a tutorial about this.
Glad I could help
Hey, thanks for the tutorial, it helped a lot! But I have a quick question: how do I make the face and the rest of the body have matching colors and tones? Which settings do I need to change? Thanks!
Yeah, I was thinking the same
He didn't mention it in the video, but there is another ControlNet model simply called 'color' (search for t2iadapter color) that will make a mosaic-grid-like sampling of your source image colors and apply it to the generated image
use a full body in the imported image
@@Geekatplay could you please explain in detail? How is this done?
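To make the mosaic idea concrete, here is a rough approximation of what that color preprocessor produces (an assumption about its behavior, not the extension's actual code): average the image down to a coarse grid, then scale it back up with hard edges.

```python
# Approximate the 'color' adapter's mosaic: shrink the image so each cell
# becomes one averaged color, then enlarge with nearest-neighbor so the
# grid of color tiles is preserved. The 64x factor is a guess.
from PIL import Image

src = Image.open("source.png").convert("RGB")  # hypothetical input
w, h = src.size
palette = src.resize((max(1, w // 64), max(1, h // 64)), Image.BICUBIC)
mosaic = palette.resize((w, h), Image.NEAREST)
mosaic.save("color_grid.png")
```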
very cool👍
thank you for your support!
Top!
thank you!
name of the toolkit, pls?
Very interesting
thank you
Could you do this with a photo of a building or house, keeping an accurate representation of the subject and placing it in a different environment?
How is your computer so fast? Great stuff, thanks for sharing!
I noticed something about your Stable Diffusion setup when you were using the ControlNet features.
There was a LoRA feature just above the ControlNet menu.
How do I get that in my SD?
excellent, really nice
Thank you! Cheers!
Hello, I've installed ControlNet, but I can't see the "Preview annotator result" buttons. Should I install another extension, or what?
in the newer version it is a spark-like icon next to the preprocessor dropdown selector
@@Geekatplay Ok I got it, thanks!
Very good tutorial!
thank you
Wow! This is amazing! But how do I get this crazy tool? Is this not Leonardo or the Stable Diffusion website?
This is a local installation of Stable Diffusion; check my channel for the videos on how to install it
good skills 👋
thank you
Awesome tutorial, man. Some people have recommended Stable Diffusion to me as the most accurate image2image tool; currently I'm using Midjourney, and it always changes the facial features of my character. Can you tell me if Stable Diffusion is really accurate with character consistency most of the time? Or is it as tricky as Midjourney? Thanks for your time.
thank you
I have all the same settings as you, but when I'm in "inpaint" it just generates the face and doesn't keep the body or background. Why is this?
how do I install Composable LoRA?
I need to make a video about it
@@Geekatplay I think I just found it. But thanks; if you want to make an explanation of it, please go ahead.
what is the website he is using to run Stable Diffusion?
it is local installation: ruclips.net/video/oTrmgXuc3e8/видео.html
Great video 👏👏👏 Subscribed 👍
thank you
I don't know why, but when I use inpaint it completely ignores the previous controls for the pose and just pastes the face onto a completely different, random image.
be sure to check which inpainting area you want to use; it should be "Inpaint masked" or "Inpaint not masked"
@@Geekatplay thank you!!!
where can I find the ControlNet extension???
it is in the "Extensions" tab
@@Geekatplay ok ty, I'll try. But if the style of the model is unreal, like anime or something else, how do I change the style of the face?
Well, I installed everything as well as I could, but after inputting a ControlNet image I can't see "Preview annotator result". There's just nothing there.
same issue, I do not have the preview result button
@@lost-frequency have you found the solution?
@PaonSol and @Lost Frequency Band
Press the "allow preview" checkbox, then a little boom button will appear next to your choice of preprocessor
The preview button is not visible now. When you try the tool, look for an icon like this 💥; that is the preview button. Just enable preview and tap that icon, and your preview will be there…
How was this software set up? What's the install process?
it is Stable Diffusion, Automatic1111 installation.
how do I configure the RPG4 model?
the link to the manual is in the description; they have recommended settings in there
can this be achieved using Leonardo or Midjourney?
not yet
Nice nice thank you
Thank you too!
Hey man, great video, but I just can't manage to get full-body shots of people. It's always cropped to the head or upper body. Any ideas what I can do?
This might help: I changed the first prompt to ‘full body pose’ and it gives a nearly full body.
it was originally a 2/3 photo. For full body, there are tricks: add (hat), (shoes), (floor), (sky), etc., something above the subject and something below
@@Geekatplay ah nice, that sounds smart. thanks!
thank you !!
any time
where do you get the checkpoint, and how do I install it, please?
you can copy the checkpoint to the models folder (see the sketch below)
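Concretely, that might look like the following (default A1111 paths assumed; the checkpoint filename is hypothetical):

```python
# Install a downloaded checkpoint by copying it into A1111's models folder.
import shutil
from pathlib import Path

checkpoint = Path.home() / "Downloads" / "my_model.safetensors"  # hypothetical file
dest = Path("stable-diffusion-webui") / "models" / "Stable-diffusion"
dest.mkdir(parents=True, exist_ok=True)
shutil.copy2(checkpoint, dest / checkpoint.name)
# Then press the refresh icon next to the checkpoint dropdown in the UI.
```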
How do I install this software?
check this video: ruclips.net/video/oO3zIfH4LRE/видео.html
I tried this on my phone and it works, but I can't find the option to use my own picture. Where is it?
how can we batch inpaint, for the purpose of processing PNG sequences?
you can create multiple masks and load them as a batch