Fantastic, thanks for the guide! I'm just wondering why you chose "default negative" alongside "digital painting". I'm new to this and I've never used "default negative", so what does it do?
EDIT: It worked well with the word "cowboy", but when I type "green monster boy" the result becomes a mess and looks nothing like the cowboy ballerina.
Hi there!
Default negative is a preset of negative prompts that works well for most images in SD 1.5. That is why Sebastian chose it :)
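(For anyone curious what such a preset amounts to under the hood, here is a minimal sketch using the diffusers library rather than the A1111 UI. The specific terms in the negative prompt are an assumption, typical SD 1.5 quality filters, not the exact contents of the "default negative" preset.)

```python
# Minimal sketch with the diffusers library (assumed setup, not the A1111 UI).
# The negative-prompt terms below are typical SD 1.5 quality filters, used here
# only to illustrate what a "default negative" style preset does.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

default_negative = (
    "lowres, bad anatomy, bad hands, extra fingers, "
    "blurry, watermark, text, jpeg artifacts"
)

image = pipe(
    prompt="cowboy ballerina, digital painting",
    negative_prompt=default_negative,  # steers sampling away from these concepts
    num_inference_steps=25,
).images[0]
image.save("cowboy_ballerina.png")
```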
Hello, I would like to ask a question. Why is it that when I set openpose_full and control_v11p_sd15_scribble [d4ba51ff], I get the skeleton of the source image, but when I click Generate, the image is created from the prompt alone, without the skeleton?
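(A likely cause here is the mismatch in the question itself: the openpose_full preprocessor is paired with the scribble model, so the skeleton it produces is not something the scribble model knows how to follow. Below is a rough sketch of a matching openpose pair using diffusers and controlnet_aux, assumed setup rather than the A1111 UI; the file names are hypothetical.)

```python
# Rough sketch with diffusers + controlnet_aux (assumed setup, not the A1111 UI).
# Key point: the preprocessor output (an OpenPose skeleton) must be fed to the
# matching ControlNet model (openpose), not to the scribble model.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

source = load_image("source_photo.png")  # hypothetical local source image

openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
skeleton = openpose(source)  # the stick-figure pose map shown in the preview

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed SD 1.5 checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("cowboy ballerina, digital painting", image=skeleton).images[0]
image.save("posed_result.png")
```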
Seems to have more control than Midjourney, nice!
Yes, with Stable Diffusion you have way more control than with any other AI image generator :)
Awesome... Keep the A1111 tutorials coming, I really like them.
Thank you for the positive feedback, more tutorials coming every week!
I have the Canny button but there is no model. I went to the model folder and nothing is there; all I have is the openpose one. Could you post a link to the Canny model?
Hi there!
Of course, here you go: huggingface.co/lllyasviel/sd-controlnet-canny
Happy generating!
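(If anyone wants to use that checkpoint outside the UI, here is a minimal sketch with diffusers, assuming the repo ID from the link above; for A1111 itself you would typically just place the downloaded model file in the ControlNet extension's models folder instead.)

```python
# Minimal sketch with diffusers (assumed setup, not the A1111 UI).
# Repo ID taken from the link above.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed SD 1.5 checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```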
I usually do manual inpainting for fingers instead. It's much more accurate that way and gives us humans something left to do (lol). I'll use ControlNet when I want a real photo as the pose reference or a specific piece of architecture as the background, because you can layer these together.
Good point. Yes, it can be more accurate, and if you enjoy the process of doing it, that's all that matters!
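(On the layering idea: outside the UI, stacking units corresponds to passing several ControlNets at once, each with its own conditioning image. A rough sketch with diffusers, assuming an openpose unit for the pose photo and a canny unit for the architecture; the conditioning image files are hypothetical.)

```python
# Rough sketch of stacking two ControlNet units with diffusers (assumed setup);
# in A1111 the equivalent is enabling multiple ControlNet units in the extension.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

pose_map = load_image("pose_skeleton.png")   # hypothetical openpose map of the photo
edge_map = load_image("building_edges.png")  # hypothetical canny map of the architecture

controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed SD 1.5 checkpoint
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "dancer in front of a cathedral, digital painting",
    image=[pose_map, edge_map],                # one conditioning image per unit
    controlnet_conditioning_scale=[1.0, 0.7],  # per-unit weights
).images[0]
```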