Great channel, thanks for your detailed info on Flux. I was very confused about the different clips.
Thank you! No problem. In these kinds of setups there are usually two clips. The main one is the positive prompt, which tells the image gen what to create; the second is the negative prompt, which tells the image gen what NOT to create, though depending on the weights this may or may not have much effect. In Flux I think the negative prompt has minimal effect.
@@mhfx From what I've learned for that particular node type, the clip_l text area (the top) should be more scene/background prompts using comma-separated values, while the t5 text area should be natural-language prompts that focus more on the character.
Latent Vision's deep dive on Flux taught me a lot.
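A concrete sketch of the split described above, with made-up example prompts (the node and field names are real ComfyUI/Flux ones, but the prompt text is purely illustrative): clip_l gets short, comma-separated tags for the scene, while t5xxl gets a natural-language sentence about the subject.

```python
# Hypothetical prompt pair for the CLIPTextEncodeFlux node, illustrating
# the clip_l vs. t5 prompting style discussed above.
clip_l = "forest clearing, golden hour, mist, shallow depth of field"
t5xxl = (
    "A red-haired woman in a green cloak stands at the edge of a "
    "forest clearing, looking toward the camera."
)

# Collected the way you'd paste them into the node's two text areas.
prompts = {"clip_l": clip_l, "t5xxl": t5xxl}
print(prompts["clip_l"])
print(prompts["t5xxl"])
```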
Nice tutorial bro
thx man!
I am learning ComfyUI and Flux and was watching your video here to learn more about ControlNet. One thing I thought of right away is how much I *HATE* the ClipTextEncodeFlux node. WTF, why do we have to enter the same text twice?? Then it occurred to me that I can convert both boxes to inputs and use a text box to feed both.
💯💯 Yes, you can absolutely do that, and I recommend keeping them the same for simplicity, but it's also interesting to try different things in each clip; it gives you pretty varied results! Give it a try
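The same trick can be sketched in ComfyUI's API-format workflow JSON: one text node feeds both clip_l and t5xxl on CLIPTextEncodeFlux, so the prompt is only written once. The "TextBox" class name here is hypothetical (any custom node that outputs a string works; the exact class depends on your installed node packs), and node "0" stands in for a CLIP loader elsewhere in the graph.

```python
# Sketch of wiring one text source into both prompt fields of
# CLIPTextEncodeFlux, using ComfyUI's API-format workflow structure
# (node references are ["node_id", output_index] pairs).
workflow = {
    "1": {
        "class_type": "TextBox",  # hypothetical; use whatever string-output node you have
        "inputs": {"text": "a cat wearing a space suit, studio lighting"},
    },
    "2": {
        "class_type": "CLIPTextEncodeFlux",
        "inputs": {
            "clip": ["0", 0],    # CLIP loader defined elsewhere in the graph
            "clip_l": ["1", 0],  # both prompt fields wired to the same text node
            "t5xxl": ["1", 0],
            "guidance": 3.5,
        },
    },
}
print(workflow["2"]["inputs"]["clip_l"])
```

In the UI this corresponds to right-clicking the node, converting both widgets to inputs, and connecting the same text node to each.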
Can you please show a canny example? My tests are showing bad results with the same flow ((
Ah, gotcha. It may take some time to get used to, and it really depends on the subject. Also, even though I released this video a few days after xlabs announced their lora and controlnets, and the technique should still be exactly the same, xlabs has already updated them, so some of the values may have changed. I will add an update on their latest changes to my list of upcoming videos.
Thumb up for you
Thank you friend!
LETS GOOOOOOOOOOOOOOO!
Thank you!!!!
Yoooo Mike... I just subbed as well... on my AI channel and my gamer one.
Thank you! I've subbed back, I appreciate the support!
Do you know if loras work with nf4?
Great question! For others who may also be interested: the NF4 version is a speed-optimized version of flux1 that is still considered experimental. To install it, you can change your ComfyUI channel to "dev" instead of "default" and search for nf4 in the ComfyUI custom nodes manager. It uses the exact same setup as the standard flux1 version from this video, except it requires a different node to load the checkpoint. To answer your question, though: they do note in the manager that the current NF4 version is incompatible with loras.
More than 48 minutes for 1 image. FLUX is an AI image model for the 1% of hardware users in the world; 8 GB of VRAM is just not enough. On another ControlNet try it had only reached 8% after 30 minutes, so I stopped it. I will use FLUX online where I can.
👍 RTX 4060 not good enough for controlnet or ip adapter
You can try the flux schnell fp8 checkpoint model, which is supposed to be less VRAM-intensive, and lower the image resolution.
@@mhfx For just creating images it's not bad. Between 2 and 15 minutes or more depending on the model, but for anything more advanced like Loras and ControlNet I will stick with SDXL unless something changes. For me, using NF4 & GGUF models has worked best. Schnell works well, but faces look like plastic to me.
Niceeee -- I'm glad you found something that works for you ✌️👍
I don't think that's what the fury lora is for. Lol.
😳😳😂😂