Wow, I've been looking for a tutorial on how to enhance sketches with Artificial Intelligence for two months now. The search ended with your video. Thank you very much!
Nice! ☺️
Well done, great explanation. Of all the ControlNet tools, this is the one that most upsets old-school photographers.
I don't know how to thank you enough. Thanks a lot, I've been looking for this, and it's easy to understand! Thank you so much again.
Very useful, thanks a lot!
🤯🤯🤯 AMAZING!! Thanks Laura!
This is a great tutorial. very thorough
Very helpful, thanks for all your tutorials.
thank you for this tutorial. i love your channel! subbed.
Super helpful thanks!
so much to learn with this channel
Hi Laura, thank you so much for the tutorial. Does it work on an iPad Air?
At 2:20 you said you’re using something to run it. What did you say? I can’t understand. Thanks in advance 👍
I said I'm using Google Colab to run Stable Diffusion. Note: when I made the video, Google Colab was free. If you have a powerful computer, you can run it locally.
very good video, thank you!
Thank you for documenting the workflow so well.
just found your channel. great tips and advice. subbed! thank you for the info
Very helpful tutorial, and it showed me the usefulness of the X/Y/Z plot! Also, perhaps you already know this by now, but I noticed the double dog snout near the end. I read somewhere that this can be caused by the final canvas not being a true 512x512 image (or a multiple of 64), since I think Stable Diffusion is trained entirely on 512x512 images.
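Along the same lines: if SD 1.5 works best at 512x512 (and more generally at dimensions that are multiples of 64), you can snap any canvas size to the nearest multiple of 64 before generating. A minimal sketch; the helper name is my own:

```python
def snap_to_multiple_of_64(n: int) -> int:
    """Round a canvas dimension to the nearest multiple of 64 (minimum 64)."""
    return max(64, round(n / 64) * 64)

print(snap_to_multiple_of_64(500))  # 512
print(snap_to_multiple_of_64(700))  # 704
```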
Thank you Laura, just installed. How can I link my output to Google Drive, or have my output folder export to my external drive?
I'm having difficulty converting a color painting into something photorealistic. It almost seems like I have better luck with black-and-white line drawings than with color paintings. Any suggestions are appreciated 🥴
try to use a different controlnet model, like softedge
@@LaCarnevali Will try it out... haven't used SoftEdge yet. Getting a little better, but I still haven't had a breakthrough with sharp images that maintain the same structure. I have noticed that adjusting prompts can help, though. I'll report back if I have any luck 😀
@@CoconutPete You could also use a combination of ControlNet models. Yeah, let me know how it goes; it's not easy to get good results sometimes. Make sure you are using a good checkpoint (and consider that the SDXL ControlNet models are not as good as the SD 1.5 ones).
for some reason my canvas isn't showing a color... when I draw... hmm
Hello!! Thank you very much!! Where I can find the program?
What program? Scribble? huggingface.co/lllyasviel/sd-controlnet-scribble
Hi Laura, I don't seem to have any ControlNet enabled, or I've done something wrong somewhere, since I don't have the ControlNet selection pane in my options window. Also, I don't seem to have the Stable Diffusion v1.5 checkpoint, just a model.ckpt rather than the one I see on your screen... Care to help out here, please?
Hi Hens, thank you for the comment - next time, I'll add this step :)
Are you running it locally? When you run it locally, you need to download the ControlNet extension. You need to "git clone" the ControlNet repo into the "stable-diffusion-webui > extensions" folder. After that, you need to download the safetensors from Hugging Face and move them into the "stable-diffusion-webui > extensions > sd-webui-controlnet > models" folder.
ControlNet repo: github.com/Mikubill/sd-webui-controlnet
Hugging Face safetensors (under the Files and versions tab): huggingface.co/webui/ControlNet-modules-safetensors/tree/main
If the above is not clear, I explain how to do that in this video at min 9:52:
ruclips.net/video/SktvO_OtnOQ/видео.html
The model.ckpt you see in the video is the v1-5 model, just renamed.
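For reference, the steps above can be sketched as shell commands. The paths assume a default stable-diffusion-webui checkout, and the safetensors filename is just one example from that Hugging Face repo:

```shell
# 1. Clone the ControlNet extension into the WebUI extensions folder
cd stable-diffusion-webui/extensions
git clone https://github.com/Mikubill/sd-webui-controlnet

# 2. Download a ControlNet model (safetensors) into the extension's models folder
cd sd-webui-controlnet/models
wget https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_scribble-fp16.safetensors

# 3. Restart the WebUI so the extension and model are picked up
```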
Nice
I can't change the colour to draw with in the canvas? Just white, not black.
The ControlNet canvas is black and white only, as you are drawing a mask.
great!
Very detailed, and I liked the theoretical part. I now understand why I need to use the X/Y plot.
If you could also do a video about creating ads in SD, it would be great to watch. I make cups from gypsum but I'm not really good at product photography, and I want a good-looking Instagram for my goods. I found a few videos about this topic, but they aren't working for me. So if you are searching for a new topic, maybe this is it.
My Scribble is not showing in the list, even after installing the latest version, but the Scribble model shows. Can anyone help me?
Hi Lucas, you need to download Scribble from Hugging Face using this link:
huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble.pth
Then, you need to move the file into the models folder (see path below):
stable-diffusion-webui/extensions/sd-webui-controlnet-main/models
If you don't have the sd-webui-controlnet-main extension, you can git clone this repository:
github.com/Mikubill/sd-webui-controlnet
(you can watch this video at minute 9:52):
ruclips.net/video/SktvO_OtnOQ/видео.html
Hi, I Can't find ControlNet button
Hi Aleksandre,
Are you running it locally? When you run it locally, you need to download the ControlNet extension. You need to "git clone" the ControlNet repo into the "stable-diffusion-webui > extensions" folder. After that, you need to download the safetensors from Hugging Face and move them into the "stable-diffusion-webui > extensions > sd-webui-controlnet > models" folder.
ControlNet repo: github.com/Mikubill/sd-webui-controlnet
Hugging Face safetensors (under the Files and versions tab): huggingface.co/webui/ControlNet-modules-safetensors/tree/main
If the above is not clear, I explain how to do that in this video at min 9:52:
ruclips.net/video/SktvO_OtnOQ/видео.html
@@LaCarnevali Thank you
O beautiful and kind lady, please help me: how can I improve an old photo with this artificial intelligence so that the basic characteristics of the photo do not change, while at the same time the details look completely natural? If there is a tutorial, I am very eager to follow it, because I have been looking for such a feature for years. But whatever I do, the characteristics of the photo change, and if I reduce the noise, the details of the photo are reduced too. Sorry, I wrote this text with Google Translate because I don't know English.
Hi, no worries. Img2img is the way to go, and the prompt description is important. You can then use CodeFormer to improve the quality. For facial expressions, you can use ControlNet MediaPipe.
You might want to watch this video:
ruclips.net/video/DCfhLtv2IRk/видео.html
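To make the img2img suggestion concrete: a minimal sketch of building a request for the AUTOMATIC1111 WebUI img2img API (this assumes the WebUI is running locally with the --api flag; the helper name and the denoising value are my own choices, not from the video). A low denoising strength is what preserves the original photo's characteristics:

```python
import base64

def build_img2img_payload(image_path, prompt, denoising_strength=0.3):
    """Build a JSON payload for the AUTOMATIC1111 /sdapi/v1/img2img endpoint.

    A low denoising_strength (roughly 0.2-0.4) keeps the structure of the
    original photo; higher values change it more."""
    with open(image_path, "rb") as f:
        init_image = base64.b64encode(f.read()).decode("utf-8")
    return {
        "init_images": [init_image],
        "prompt": prompt,
        "denoising_strength": denoising_strength,
        # Uses the face restorer selected in the WebUI settings (e.g. CodeFormer)
        "restore_faces": True,
    }

# Usage (assumes the WebUI was started with --api):
# import requests
# payload = build_img2img_payload("old_photo.png",
#                                 "an old family photo, restored, detailed, natural")
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```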
@@LaCarnevali 🖤🖤☺
To me, the final image looks worse than the original sketch. That sketch has artistic flair.
You aren't really supposed to use a high-quality, detailed sketch for ControlNet; it can understand it's a dog from a much simpler drawing. You just need a simple sketch that captures the basic elements of a dog with the minimum amount of detail necessary. The examples on the ControlNet Hugging Face page show that clearly.
@@AscendantStoic Agreed. I think her sketch works better with the Depth model.
x1.5
Thank goodness you speak English, because I was afraid I wouldn't understand a thing 😂
ahahahah!!!
@@LaCarnevali Anyway, thanks a lot, your video is VERY useful 😁
Will this work with sdxl 1.0 yet?
not yet
The canvas width and height in the ControlNet section are only there for when you create a blank canvas to sketch on. They make no difference to the ControlNet settings.