Find my Workflow here: openart.ai/workflows/oliviosarikas/lesson-4---comfy-academy/33ECh584TbdXjPyitkff
Other Lessons:
L1: Using ComfyUI, EASY basics : ruclips.net/video/LNOlk8oz1nY/видео.html
L2: Cool Text 2 Image Trick in ComfyUI : ruclips.net/video/6kHCE1_LaO0/видео.html
L3: Latent Upscaling in ComfyUI : ruclips.net/video/3W-_B_0F7-g/видео.html
👋
I have no words to describe my gratitude. Definitely going to buy you some coffee.
Thanks!
Thank you, this has been so helpful. ComfyUI is very user friendly once you get past the learning curve (which you have shown me), and you have absolutely made that curve so much shorter. On to L5!!!
Outstanding tutorials! Easy to understand and follow. :-) I hope to see more of them :-)
This is so cool!!!!!!!!!!!
I see why people prefer Comfy UI, you have much more control.
I don't wanna go back to A1111 anymore, myself XD
Reminds me so much of DaVinci Resolve
I want the next lesson already!! thanks, Olivio!!
Catching up slowly hehe.
Amazing lesson and great instructions.
Let's go!!!
Thank you so much for the detailed video! it helps me a lot! :)
Thanks for these videos, you explain them well. I'm fairly new to Comfy (coming from A1111) and primarily want to run my own digital art through AI, so these tutorials are helpful. :)
that is soooo cool, great job man!
I love you and your tutorials, so much man! 💛👏🏻
this is pretty much what pushed me to switch to comfy ui, thank you! trying to get latent couple & composable lora masking to work correctly in a1111 or forge was driving me nuts 😂
What would be the best way to create a sketch image like the one you are using? Perhaps even directly within ComfyUI? Thanks for your video!
amazing !
Thank you so much Olivio! It is the first time I understand how img2img works.
Should the loaded image have the size that we want to generate? (e.g. 512x768)
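Replying to the question above, in case it helps: img2img will encode whatever size you load, but SD1.5 checkpoints behave best near their training resolution, so scaling the source to the target size before encoding is a safe habit. A minimal sketch in ComfyUI's API-format JSON, where the node IDs, filename, and checkpoint name are just placeholders:

import json

# Load -> scale to the target generation size -> encode to latent.
# "my_sketch.png" must exist in ComfyUI/input; node IDs are arbitrary strings.
graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "my_sketch.png"}},
    "2": {"class_type": "ImageScale",
          "inputs": {"image": ["1", 0], "upscale_method": "bilinear",
                     "width": 512, "height": 768, "crop": "center"}},
    "3": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "4": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0],   # the scaled image
                     "vae": ["3", 2]}},    # the checkpoint's built-in VAE
}
print(json.dumps(graph, indent=2))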
amazing
I keep producing very blurred images for unknown reasons with the exact same workflow. I've tried switching between different checkpoints and VAEs. Does anyone have any ideas?
This happened to me too. Increase your steps in the sampler. Also increase the CFG, and that should iron things out. It will take much longer the greater the increase, but the result will be clearer.
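To make the reply above concrete, these are the KSampler inputs it is talking about, again in API-graph form; the numbers are illustrative starting points, not definitive values:

ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
        "latent_image": ["3", 0], "seed": 42,
        "steps": 30,        # try raising from 20 to 30+ if output stays blurry
        "cfg": 8.0,         # higher CFG follows the prompt harder; ~7-9 is typical
        "sampler_name": "euler", "scheduler": "normal",
        "denoise": 0.75,    # in img2img, a very low denoise can also look soft
    },
}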
The workflow has a "Load VAE" node. What VAE should I use and where can I get it? I see there are some VAEs on Civitai (a lot fewer than regular models), but I'm not sure which one to use.
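One answer that has worked for me, hedged accordingly: the VAE most people pair with SD1.5 models is stabilityai's vae-ft-mse-840000-ema-pruned from Hugging Face. Drop it into ComfyUI/models/vae/ and it appears in the Load VAE dropdown; the fragment below wires it into the decode step instead of the checkpoint's built-in VAE (node IDs are placeholders):

nodes = {
    "10": {"class_type": "VAELoader",
           "inputs": {"vae_name": "vae-ft-mse-840000-ema-pruned.safetensors"}},
    "11": {"class_type": "VAEDecode",
           "inputs": {"samples": ["8", 0],   # latent coming out of the KSampler
                      "vae": ["10", 0]}},    # the separately loaded VAE
}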
I use a LoRA trained on my own face. It seems that the prompt has to be almost the same as the example prompt provided by the LoRA creator to get a recognizable face.
When I use ControlNet, the face becomes different and doesn't look like me anymore.
Is there a solution for this?
Does the VAE need to be different if I'm using a different model?
Could you use 2 images, 1 for the character and 1 for the background, then blend them to make the final image?
@OlivioSarikas After looking in all the nodes, I was not able to find LatentBlend. I do see that you have it in your example workflow. Would it be under Post Processing?
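If I remember right, LatentBlend is a built-in node but it sits under "_for_testing" rather than Post Processing, so the quickest way to find it is the double-click search box. A sketch of the two-image blend asked about above, with placeholder node IDs:

blend = {
    "20": {"class_type": "VAEEncode",      # character image -> latent
           "inputs": {"pixels": ["1", 0], "vae": ["3", 2]}},
    "21": {"class_type": "VAEEncode",      # background image -> latent
           "inputs": {"pixels": ["2", 0], "vae": ["3", 2]}},
    "22": {"class_type": "LatentBlend",
           "inputs": {"samples1": ["20", 0],
                      "samples2": ["21", 0],
                      "blend_factor": 0.5}},  # 0.5 weighs both latents equally
}

Feed the blended latent into a KSampler with a moderate denoise so the model can reconcile the two halves.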
Hi, thanks a lot for your tutorial, but when I run this lesson workflow in the cloud, it shows the error below:
Prompt outputs failed validation
LoadImage:
- Custom validation failed for node: image - Invalid image file: 04 (3).jpg
How to fix this?
Hi, I'm trying to inpaint one area of a photo, but I don't want any changes anywhere else. How should I do that?
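One way to do this, sketched under the assumption that you painted a mask on the photo (right-click the Load Image node, "Open in MaskEditor"): SetLatentNoiseMask restricts sampling to the masked region, so everything outside it stays untouched. Node IDs and the filename are placeholders:

inpaint = {
    "30": {"class_type": "LoadImage",
           "inputs": {"image": "photo.png"}},
    "31": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["30", 0], "vae": ["3", 2]}},
    "32": {"class_type": "SetLatentNoiseMask",
           "inputs": {"samples": ["31", 0],
                      "mask": ["30", 1]}},  # the mask output of Load Image;
                                            # only this region is re-rendered
}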
The latent image method is interesting, but doesn't it lack control compared to OpenPose or Canny?
I know this is probably the most basic of questions, but what does the () do in a prompt? Assign detail to the prior word?
@JM-yn2lw it makes the word more important and forces the model to include it
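To spell out the rule: each pair of parentheses multiplies the token's attention weight by 1.1, and a number after a colon sets the weight explicitly. The same syntax works in A1111 and ComfyUI:

prompts = [
    "a portrait, (detailed face)",      # weight 1.1
    "a portrait, ((detailed face))",    # 1.1 * 1.1 = 1.21
    "a portrait, (detailed face:1.4)",  # explicit weight 1.4
    "a portrait, (detailed face:0.7)",  # below 1.0 de-emphasizes the phrase
]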
Do you run on the cloud or your own PC?
Cool. I found that if you drop your output image back into the image loader and render again, adjusting the denoise, you can get some really great results, and repeating that process over and over produces some very interesting variations. :O)
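That loop-back trick can also be queued from a script against a local ComfyUI server (the default address is 127.0.0.1:8188). A rough sketch, not a polished tool: the prompt text, filenames, and settings are placeholders, and each new pass expects you to copy the previous output from ComfyUI/output into ComfyUI/input and lower the denoise slightly:

import json
import urllib.request

def queue_img2img(image_name: str, denoise: float) -> None:
    """Queue one img2img pass; image_name must exist in ComfyUI/input."""
    graph = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "a detailed digital painting", "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
        "4": {"class_type": "LoadImage", "inputs": {"image": image_name}},
        "5": {"class_type": "VAEEncode",
              "inputs": {"pixels": ["4", 0], "vae": ["1", 2]}},
        "6": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["5", 0],
                         "seed": 7, "steps": 25, "cfg": 7.5,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": denoise}},
        "7": {"class_type": "VAEDecode",
              "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
        "8": {"class_type": "SaveImage",
              "inputs": {"images": ["7", 0], "filename_prefix": "loopback"}},
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# First pass; for the next one, copy the saved result into ComfyUI/input
# and call again with e.g. denoise=0.5, then 0.4, and so on.
queue_img2img("start.png", denoise=0.6)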
Maybe with this "course" I can give ComfyUI another run.
Stop generating people and portraits, do something more complicated :)
Agreed, but also thanks so much Olivio these have been really amazing as a first dip into stable diffusion for me.
"often the models are trained in a way that it's always doing the same thing, everything is centered" then goes ahead and generates a centered image of another pretty girl.
People complaining about free content. Hey, go pay for it then you can tell the guy what to do eh?
Thank you, Olivioooooooo
I got an NSFW result