It's more than enough for beginners... just a few questions: 1. You didn't mention how to edit audio using Comfy, or does it not work? 2. For people who don't have a PC with a GPU, how do they use it on a server, I mean downloading and using nodes? 3. You need videos for pro and expert Comfy AI, parts 2-10, because I loved the way you teach; even my grandmother could start working with Comfy tools 😮😮😅😮
Hi, great video. What model were you using for that image-to-image upscaling? I only found a Lightning LoRA. It was taking forever trying to upscale using the denoise method with the Flux models.
We lived in an era of local journals. Then independent hosted blogging came along. Now most people use FB and IG as their blogs and vlogs. Companies will always find a way to insert themselves into our lives.
I've been so annoyed looking around for a ComfyUI tutorial where someone could just explain this to me in plain, simple English from scratch without being clueless in terms of how to teach and be straightforward... This, on the other hand, is the polar opposite... You are a GODSEND! This is exactly what I've been looking for, for like 4 days... this is so great! You're a really good teacher btw, thank you so, so much for this. I'm finally understanding Comfy, and it's not hard now that I have found someone who knows how to communicate... Thanks for this 💚
You're welcome. Glad I could help!
So many tutorials, and none of them give even a brief explanation of what a VAE is, or CLIP, LoRAs, etc.
Yes. bad teachers = no understanding of students
This video probably won't do as well as your others, but for the love of god, please consider doing further tutorials on comfyUI. This is the first tutorial that is understandable and comfortable to follow.
Further Ideas: Flux specific tutorials, Inpainting, character consistency.
Thanks!
@@theAIsearch I second this!
I third this! Love your teaching style. Flux + Comfy would be amazing!
Yes please! These are great.
LORAs, controlnet, consistent characters. 😂😅
I have watched many comfyUI tutorials. Some of them are good, but this is definitely the most comprehensive and easy to follow tutorial I have seen so far. 👍
Thanks!
But the video is 1 hour??!
I really appreciate that you don't just say "do this, click that, put this value in" you actually explain what each parameter does, and how adjusting the value up or down will affect the outcome. Thank you very much!
I also agree with many others on here that you should upload tutorials on any other comfyui/Ai topics you think would be beneficial for us noobs! Thanks again
Thanks!
I have seriously been waiting to learn ComfyUI. This tutorial came just in time!
hope this helps!
So do i, thank you sir!
I have like a hundred tabs open across multiple windows, and this video was one of them. I was already badly disappointed by various other crash courses and tutorials about ComfyUI on YouTube, but still mustered up the courage to watch this, and I got so hooked within the first few minutes; it's just incredible. In fact, for a second I switched to another tab to answer an email, but then I lost track of which tab this video was in and I was so pissed, but luckily I found it again.
Your style of explaining is so great! you should do more of these.
Thanks a lot for this tutorial!
For those who may have the same problem: I had a hard time with the openpose model, even with the exact same parameters as shown in the video. I found that using the ControlNet model "SDXL-controlnet: OpenPose (v2)" (available in ComfyUI Manager) works, while the all-in-one model doesn't. For me, anyway.
Yeah
I paid for other tutorials, but I’m still confused about such basic tips, like the Ctrl + Shift + B key. Even though I’m Korean and not very good at English, I understood your tutorial better than ones in my own language. It’s like magic.
Again, I just want to say THANK YOU 😊
You are welcome! Hope this helps
Wish I could give more but this is my first step in learning Gen AI and your tutorial set the bar high, I learned so much. Thank you!
Thank you so much!
58:45 If you do not see any options other than "tile" in the processor drop-down list, you need to install another node first:
[ jiboxemo, 8 months ago: AV_ControlNetPreProcessor shows no depth_midas on it. Only options are (none, tile, tile). Other than that, great work. (edit) After installing Fannovel16 "ControlNet Auxiliary Preprocessors" (ID 6 in Manager) everything is ok ]
Thank that man for pointing this out. I got it fixed.
Thanks man, you saved me a lot of time!
I had the same problem. Didn't see anything other than tile at first. After installing the aux preprocessors, I can see the other preprocessors. However, if I select one of "pose/openpose/dwpose", I get a "list index out of range" error in the AV_ControlNetPreprocessor node. Anyone know how to fix this?
@@michaellong5871 Same issue can't find any fix any where on the whole internet.
@@michaellong5871 Don't know if you still need it, but what I did is:
delete the node "ControlNet Preprocessor" > double click > search: "AIO Aux Preprocessor"
Use that in place of the "ControlNet Preprocessor" node from the tutorial (the names in the list are not so user-friendly, but you can clearly choose "openpose..." or anything else there).
Thanks man
Great video. I only noticed that the Ultimate Upscaler image (50:53) looks like it has an extra lip.
This is not only the best ComfyUI tutorial I've seen, and I've seen many! It is also the best tutorial on how to make a tutorial that really brings value to the viewer. Thank you very much and greetings from Bogota, Colombia.
Finally a clear and easy to understand explanation of ComfyUI that starts from scratch! I've been looking for a tutorial like this for a long time.
In case anyone else runs into the same issue! When using controlnets, the preprocessor only showed "None" or "Tile" for the processor option. To fix this, install the custom node Fannovel16/comfyui_controlnet_aux. Then restart ComfyUI and you should have the options! I may have missed this part in the video or maybe it was left out but either way I hope this helps!
Thanks for sharing!
I was following the tutorial like walking on clouds. So good!! But then I too stumbled on this issue. The first thing I did was check the replies, and here we go, haha. Thank you, it fixed it for me. The proper name in the search is "ComfyUI's ControlNet Auxiliary Preprocessors", though.
Hello, thank you for your help. I tried this one and it worked; however, now it gives me a new error, "list index out of range". Do you know how to fix this?
@@d-rdm6166 Seems to be a problem with the `comfyui-art-venture` library; this was newly reported as an issue on the GitHub repo. It was suggested to use a different preprocessor than openpose, but I still had issues. Better to swap the node out for `AIO Aux Preprocessor` by Fannovel16 and select `OpenposePreprocessor`.
I also encountered this issue, thanks for doing the leg work of figuring out how to fix it.
I will fall asleep to this every night! lol.
I've faced so many dead-end workflows that just don't make sense. They're overly complicated, and even the Manager seems unable to fix what's outdated. It's frustrating when things just don't work.
ComfyUI desperately needs an overhaul, but I don't have the answers either. Even after updating or fixing missing nodes, 95% of the time it remains unusable, and I can't figure out why. AI Search, you've been a huge help in understanding how to fix these tangled messes, or sometimes just knowing when to give up and try something new.
Finally, I’m sitting down and really absorbing a tutorial, and I have to say, you're the first one who makes me feel like I can actually learn something. Without needing to jump between different videos, you cover so much in one concise video. Thank you! These are exactly the skills I want to master for my 3D modeling, 3D printing, and AI videos.
Thanks & good luck!
It took me a solid week to understand how things worked, but it seems much more intimidating than it actually is. If you're not used to working with the terminal, that part is a short lesson, honestly.
yes, open source stuff is a pain to install. hopefully this tutorial helps. i tried to minimize any use of the terminal. you can install most things using the manager
@@theAIsearch you did a good job honestly I could have saved sometime if I had this tutorial a few weeks ago.
As a beginner, I'll also say that your videos are really very friendly, thank you very much. Because of my professional needs and the high learning threshold of Flux, I'd been using mimicpc to run ComfyUI before; it can load the workflow directly, I just have to download the Flux model, and it handles the details wonderfully. But after watching your video, running ComfyUI on mimicpc finally feels different: I feel like I'm starting to get the hang of it.
👍👍👍 You are the best tutor ❤! Even though I am a Chinese-speaking old lady, I had no problem understanding all of it, since you explained it so clearly and in such detail. Thank you very much!
Thank you! 😃
Are you kidding me right now??? I literally downloaded comfyui a day ago! I was about to search for a tutorial! Thanks! Love your videos.
hope this helps!
@@theAIsearch your videos always help! Thank you!
Comprehensive, easy to follow and well made, this is excellent and what sets you apart from the others. Thank you for making this tutorial
You're very welcome!
WE GONNA SEE THIS TUTORIAL HITS MILLIONS OF VIEWS
😃😃😃
I never watch videos longer than 30 minutes. I rarely like videos. I almost never comment. I never subscribe after a single video. And here I just did all of those. This was incredible. Professional and packed full of information, but not overwhelming. I simply had to comment for the algorithm. I've been struggling to find good info on understanding these workflows, and this was my answer. Thank you sir 🫡
You're very welcome!
I have not seen any other video about ComfyUI, but I am sure this is the easiest-to-follow video on YouTube!! Thanks for this awesome walkthrough. Being from a non-tech background, this took away all the doubts I had.
Wow, over an hour! Thank you for such a detailed guide, comfyui is definitely daunting!
hope you enjoy!
Yes, I think what you said is great. I am using mimicpc which can also achieve such effect. You can try it for free. In comparison, I think the use process of mimicpc is more streamlined and friendly.
Wow - so timely, so comprehensive, and so clearly explained!! ❤
Thanks!
I want to like this video thrice. Where's the button for that?! Not only is this very cleanly structured and comprehensive, but you have a beautiful way of repeating things you've used and explained earlier. This repetition increases the learning effect, and the art that you master beautifully is to throw it in so casually that it doesn't become boring, it only gives the sensation of "hey, I know this, I've seen this one earlier!". 👍👍👍 [EDIT:] I just noticed: at 49:49, the second upscaled dude has 2 mouths... 😂
Thank you!
Thank you so much. This tutorial was the best I've ever seen on YouTube about ComfyUI.
After searching for a good tutorial video explaining how to use ComfyUI, this was the best by far.
Wow....just, wow! This is the PERFECT beginner tutorial. You have such a great speaking voice, as well. This is exactly what I've been looking for, and believe me, I've been looking everywhere. Very, very nicely done! In time, you might want to add something on grouping together specific nodes (which I gather from looking at others can be done). Also, it may be worthwhile pointing out that to SOME extent, ComfyUI does a decent job of highlighting those node connections that are "candidates" for what you are trying to (eventually!) connect. That can be a big help when confronted with a new node that one (i.e., ME!) has absolutely no idea what to do with when it's created.
But those are really just minor points. I hope you continue to do a lot more of these, especially since everyone's going crazy about FLUX now, of course, not to mention that Loras are starting to slowly creep their way into Flux.
Thanks for sharing! I'll do a video on Flux once the ecosystem is more mature.
Such a good video. The thing is, I never watched a tutorial that also got me hooked on learning more. Everything explained properly instead of "put value here, do this, do that".
This is the best beginner tutorial for AI image generation, especially comfyUI, thanks a lot dear, you cleared my lots of confusion, great work thanks 😊
Glad it helped!
Thanks man for all that hard work. That's arguably one of the most useful things I've watched in a long time. Keep it up man.
Thanks
I love it, if education would be like this everywhere, we would now be playing golf on Mars :)
At 50:14, the second image is so super detailed, it comes with double lips.
Hi, thanks for this video, it's the first time someone takes the time to explain every step with patient and clarity, really please make more videos like this. Thanks again. Greetings from Bogota, Colombia.
you're welcome!
Appreciate all your content! The way you explain things makes it super easy to follow.
The best explanation of ComfyUi I've ever seen.
Wow, thanks bro! I didn't even need to use my super genius brain to understand this, you made it so easy to understand. All I need now is a better GPU.
Nice tutorial, I learnt a lot, thank you! Just a quick piece of information that could help: after installing comfyui-art-venture, you also need to install ComfyUI's ControlNet Auxiliary Preprocessors in order to see all the preprocessors, instead of a list with only None and Tile. I learnt this from @LaCarnevali's video. Thanks
helpful for me, thanks
Thanks for sharing!
For anyone who experiences problems getting the nodes for the auxiliary preprocessors: make sure that the file path to your ComfyUI folder is relatively short (under 240 characters, as I remember). I spent too much time reinstalling the extension until I realised this.
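If you want to sanity-check this, a couple of lines of Python will tell you how long your install path actually is. (The 260-character figure below is Windows' classic MAX_PATH limit; the "ComfyUI" folder name is just an example placeholder for wherever your install lives.)

```python
import os

# Replace "ComfyUI" with the actual location of your ComfyUI folder
comfy_path = os.path.abspath("ComfyUI")
print(f"{len(comfy_path)} characters: {comfy_path}")

# Windows' legacy MAX_PATH limit is 260 characters; deeply nested
# custom-node files can push past it, so leave plenty of headroom
if len(comfy_path) > 200:
    print("Warning: consider moving ComfyUI closer to the drive root")
```

If the base path is already long, moving the folder to something like `C:\ComfyUI` usually resolves these install failures.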
Thanks for sharing!
Thank you for the tutorial. I learned so much within an hour. I finally have a grasp of how to use comfyUI
This is the exact style of tutorial I've been waiting for. PLEASE DO MORE!
Great tutorial. Thanks. Looking forward to seeing more on ComfyUI.
I'm very comfortable with A1111 and have been avoiding Comfy; before this video I would try it and put it down, but this was excellent, especially starting from scratch, building a workflow and explaining the nodes. Good stuff.
Good luck!
Great tutorial! Btw: The Text To Console node helps you to inspect the prompts without having to resort to files...
Wow, as someone said, you are really a godsend to all of us. Especially for me, you are a real hero. Thanks a lot, my friend. How many of you agree?
Wow, thanks
Thank you very much, it is FANTASTIC! And there is something for your sponsor. I downloaded the app - I like it, really. But I would amend two things. 1) It shouldn't add space before and after the paragraph. 2) A drag-and-drop function from the side would be really useful.
Thanks! Forwarding to @turbotypeapp
Man, I would like to say your tutorials on ComfyUI and other AI Videos are amazing and so easy to follow just what I needed was the portable ComfyUI version so I could run it on my external hard drive keep up your great work I would love you to do a video on installing Mimic Motion.👌💯👍🏼😁
Thank you!
@@theAIsearch I moved the portable ComfyUI that I installed from your video from my C drive to my external hard drive, but Queue Prompt will not run the workflow; it just stays at DualCLIPLoader.
This made everything simple and easy to understand. Thanks and will be looking to learn more about flux as well
You are welcome!
Thank you so much. You are the best at explaining things so a newbie like myself can understand :)
ComfyUI is the de-facto libre option of image generation. On their site they say they don't make any money right now (but plan to) and want to democratize AI tooling. It's extremely based.
An important clarification: an empty latent image is not filled with noise, but with "nothing". It is when it enters the KSampler that Gaussian noise is added.
Furthermore, the image is not completely filled with noise. Simply using solid images (img2img with denoise 1.0) of different colors with the same seed is enough to see the difference.
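A minimal NumPy sketch of that first point (this is an illustration of the idea, not ComfyUI's actual implementation; the 4-channel, 1/8-scale latent layout follows Stable Diffusion's usual convention):

```python
import numpy as np

# "Empty Latent Image" is just zeros: 4 latent channels at 1/8 the pixel size
width, height = 512, 512
latent = np.zeros((1, 4, height // 8, width // 8), dtype=np.float32)
# no noise yet - it really is "nothing"

# Gaussian noise is only generated later, inside the sampler, from the seed
rng = np.random.default_rng(seed=42)
noise = rng.standard_normal(latent.shape).astype(np.float32)

# this noised latent is what the KSampler starts denoising from
noisy_latent = latent + noise
```

This also explains the second point: with denoise below 1.0, the sampler adds only partial noise on top of whatever latent you feed it, so the original colors of a solid img2img input can still bleed through.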
For my AMD GPU friends -- the Zluda version of comfyui works fantastic for me. If you have any AMD GPU it's worth a shot!
thanks for sharing!
Thank you for your work, now I understand ComfyUI and I would be able to play with it 👍
Glad to hear that!
Bro!
You are the king of ComfyUI
Thanks for the tutorial
🙏
No problem 👍
Your explanation is absolutely amazing. Please make a video about Flux in ComfyUI; it seems complicated to me. Please make a detailed video on it. Take love and care.
Wooow man, what a cool guide! I might just give Comfy a try. Does it also allow you to generate sensitive images? I mean stuff like blood, violence, and all the things that most normal AI models don't have the balls to produce?
yes, 100% uncensored
Thank you so much for your amazing tutorial video. Your explanations were incredibly clear and made ComfyUI easy to understand. I really appreciate how you broke everything down in a way that even complete beginners can follow along. This video has been incredibly helpful, and I'm truly grateful for the time and effort you put into creating such a valuable resource.
You are one of the best, this knowledge is blocked by a huge paywall in my country and you generously give it for free, thank you, may this become your good deeds till the eternal life Sir!
Thanks!
Man you are a legend!
Thank you so much for the detailed explanation
You're welcome!
Excellent tutorial, this could have been a paid video. Claps for you, man!!
This video is indeed very very helpful. Thanks a lot for your time and effort.
You're welcome
Amazing explanation! Thank you we appreciate your time and effort 👌
Thanks!
So cool tutorial! As I have worked in Cinema 4D and Blender, nodes are easy to manage.
This was absolutely superb. Thank you!
You're very welcome!
Hello, one of the greatest tutorials on ComfyUI. Please, can you do one ComfyUI flux 1?
Thanks. I'll make one soon!
I'm saving this video and will use this when my Midjourney month runs out. 30 a month is a bit too steep for me. Its image quality may be high, but it's so flawed it's really not worth the money. Try making a picture of someone eating a banana, for example. On Midjourney it's not even peeled; with the free Microsoft one it's peeled. Try to make two people in one photo, a very difficult task. Half the time they're the same person or sharing aspects of each other. Waterskiing is a total mess, ropes all over the place. Better off learning a free AI image generator.
You haven't mentioned the main Ultimate Upscaler annoyance: since it runs the same prompt for each tile, if it can hallucinate some Jesus from a pancake, it will. You have to double-check the upscaled image for characters in the shadows, double mouths (as in your upscale), NSFW body parts on every skin crease and bump, additional belly buttons, elongated appendages, etc. Yeah, it can be mitigated by making the prompt generic or reducing denoise, but then it has no room to make new detail.
One trick I use is to run a segment detector (for example SAM or SAM2) to detect things in the image automatically (e.g. faces), and then use Detailer (SEGS) from the Impact Pack to inpaint certain features with a prompt unique to them; for example, if it were a hand, I would say "close-up shot of a hand".
Great!! Could you please give more details about this part? I'm struggling with this problem! A video or workflow file would be great!
Sir, can you teach how to use a Flux checkpoint in ComfyUI, along with an explanation of what each node does? Thank you in advance.
Thanks very much, this video helped me a lot to get started with ComfyUI.
This is the best tutorial I've seen.
This is finally gonna let me do an idea I've had for years with AI
Thanks a lot.
You're welcome!
Dell and Nvidia sponsored you? That's amazing, m8. It's not like you have millions of subs (yet), so yeah.. amazing :) good for you
you are an amazing teacher. Thank you!
You're welcome!
I've learnt a lot through this tutorial..TQ..👍👍
You're welcome!
thanks for the tutorial! very clearly explained!
Awesome video! Is it possible to run ComfyUI on Mac? Also, if you're running on PC CPU what are the requirements? PC GPU requirements?
It's more than enough for beginners... just a few questions: 1. You didn't mention how to edit audio using Comfy, or does that not work? 2. For people who don't have a PC with a GPU, how do they use it on a server, i.e. downloading and using nodes? 3. You need a video for pros and experts, ComfyUI parts 2-10, because I loved the way you teach; even my grandmother could start working with Comfy tools 😮😮😅😮
Thanks, bro, really great work. You should make more ComfyUI tuts, because they're very useful.
Thanks!
Everything is great as long as you have a good GPU.
Sad, what are the specs of your GPU??
Hi, great video. What model were you using for that image-to-image upscaling? I only found a lightning LoRA. It was taking forever trying to upscale using the denoise method with the Flux models.
Exactly what I needed, thank you so much
If you run it through another KSampler with denoise at 0.5, that first upscale is actually decent. But thanks for all the tips; this is a helpful video.
Thanks for sharing!
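The two-pass trick above can be sketched as a simplified intuition for what the denoise strength does. This is not a real sampler (real samplers noise the latent partway along a sigma schedule rather than linearly blending), and the shapes and seed are made up for illustration.

```python
import numpy as np

def img2img_start(latent, denoise, seed):
    """Simplified intuition for img2img denoise strength: blend the
    source latent with seed-derived noise. Real samplers follow a
    sigma schedule, but the idea is the same."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(latent.shape)
    return (1.0 - denoise) * latent + denoise * noise

upscaled = np.ones((1, 4, 128, 128))  # stand-in for the naive upscale's latent
# denoise=0.5 keeps roughly half the original structure, so a second
# KSampler pass can add detail without replacing the image outright
start = img2img_start(upscaled, 0.5, seed=7)
```

At denoise 0.0 the source passes through untouched; at 1.0 the source is discarded entirely, which is why 0.5 is a sensible middle ground for sharpening an upscale.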
dope. excellent tutorial. thank you!
This channel is a gold mine 🥇
Thanks!
The Best. Thank You.🤩
you're welcome!
Such a life saver man 👏👏 Got yourself a new sub
Excellent video. Thank you so much! I am getting an error when using ControlNet though; have you heard of this?
In Manager > Custom Nodes Manager, try also installing ComfyUI's ControlNet Auxiliary Preprocessors.
On device open source AI is the future, not those owned by corporations and governments
We lived in an era of local journals. Then independently hosted blogging came along. Now most people use FB and IG as their blogs and vlogs. Companies will always find a way to insert themselves into our lives.
Right, as all good things come from God anyway. Amen.
That was a perfect guide, thanks a lot! If you can, please teach "WebUI Forge" as well; I heard it can be better than ComfyUI sometimes ❤
fantastic video! thanks for sharing!
50:17 was hoping you would explain the cause of his double mouth. What happened there and how to fix.
You make the installation look easy. But learning this is going to be a long, hard slog.
Well explained! Thank you!
You're welcome!
Yes please, make a FLUX tutorial!
Instead of selecting all the nodes to mute them 1:00:45, simply select the Load Checkpoint node and mute that.
This was amazing. Thanks!! But why does the tile upscaling sampling node (47:00) not connect its output to a VAE Decode node?
Thank you for your time 💪
No problem 👍