Thanks for keeping us updated on the latest 🤘
You bet!
Thank you very much. This upscaling method was a surprise. Looking forward to your next video!
glad you noticed, it can be improved a bit, but works great! i'll try to make a native tiled upscaler if i can, like with Kolors. i know most people use SUPIR or another method, but those all used previous SDXL or SD1.5 finetunes, so i want to get the "pure Flux" outputs :D Thanks to you too!
Been using Flux Dev pretty much since it was supported in Comfy and I love it. Feels like what SD3 should have been, and I can now create people wearing band T-shirts with real names on them.
This.
I am running it on an RTX 3060 6GB laptop... working fine :) Big salute to Black Forest
What flux version are you running?
@@AhmadAli-xv4vd schnell
on a 6GB laptop !! wowow
thanks for letting everyone know your result
@@FiveBelowFiveUK Thanks to you as well for creating video for this :)
This workflow pack is the best so far. But to understand every change, I have to try from V1 to V11 - for me it's okay, since it is good to learn what happened in every iteration.
it's kinda like a crash course if you run through it in order; since V9 we are adding more advanced stuff now :)
this model is amazing! feels like an open source dalle3 with better prompt following. i can only imagine how good finetunes will be
SIMPLETUNER - allows quantized training of a FLUX.1 LoRA at rank 16 (20GB VRAM)
so i hope to look into this, time allowing ;)
@@FiveBelowFiveUK very cool! are there any reasonably useful advancements in full model tuning?
@@bakablitz6591 afaik you need serious VRAM, so probably cloud only, and it looks like large datasets are needed for full model training. but time will tell as people succeed in creating finetunes
@@FiveBelowFiveUK yeah that's fair. perhaps if model merging becomes possible, people will train loras and merge them into flux for less resource consumption? as for the logistics of it i'm not too smart in this field, i just hope there are workarounds!
Flux Schnell & Dev models (both fp16 & fp8) work on my RTX 3060 (12GB).
thanks so much for leaving this comment - it will help a lot of people who are unsure if they can run it - Salutations !
The openpose avatar is so fucking weird man i love it. Are you actually narrating these while stood up?
No he has several puppets like Team America: World Police. On his second channel he does dramatic reenactments of famous Mr Beast videos.
@@sprinteroptions9490 wow, incredible. He's so talented
it is me, however i made myself into the puppet with comfy. lots of loops, which i cut to my speech as best i can. we have some videos on how you can do this and render any character over the top too! the Rotomaker and MimicMotion videos explain more
Thanks so much :) I'm proud to be "that white pose guy" :D
oh no, you exposed my other channel ! :)
Great video. Any thoughts on inpainting and Loras?
in the V4 pack we will see the introduction of the "Foda-Bridge-basic" workflow to the collection. This is already available on the discord in the #inside-line section as an "early-edition" workflow - these are subject to change when they are released, but are essentially what is coming up next :) The "Bridge" variant workflows aim to supply images to FLUX using a built-in SDXL+LORA stack generation section. This gives you the power of Flux with all your favourite SDXL models, prompts and loras.
Also, training will be investigated, but it requires 20GB of VRAM for training a FLUX LoRA at rank 16 - even with quantized training.
As for inpainting, i reckon this is possible, but often you can just use the image from FLUX with an SDXL inpainting workflow (as the inpaint part is usually small)
@@FiveBelowFiveUK Oh, that's great news! I didn't realize there was a Discord server although I should have guessed. Where can I find the invite?
@@ThoughtFission you can find the discord invite in the workflow notes and in the description for this video :)
@@FiveBelowFiveUK Thank you, thank you, thank you!
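To put the rank-16 figure from the thread above in perspective, here is a minimal back-of-the-envelope sketch of how small a LoRA adapter is compared to the base model it trains on. The hidden size, layer count, and the ~12B base parameter count are illustrative assumptions, not FLUX's real architecture:

```python
# Rough size of a rank-16 LoRA adapter vs. the base model it trains on.
# d_model, n_layers and the base parameter count are illustrative
# assumptions, not FLUX's real shapes.
rank = 16
d_model = 3072        # assumed transformer hidden size
n_layers = 57         # assumed number of adapted projection layers

# Each adapted weight W (d x d) gets two low-rank factors,
# A (d x rank) and B (rank x d), so 2 * d * rank extra parameters.
lora_params = n_layers * 2 * d_model * rank
base_params = 12_000_000_000  # commonly cited ~12B figure (assumption)

print(f"LoRA params: {lora_params / 1e6:.1f}M")
print(f"Base params: {base_params / 1e9:.0f}B")
print(f"Adapter is {100 * lora_params / base_params:.3f}% of the base model")
```

In other words, the trainable adapter itself is tiny; the ~20GB VRAM requirement is dominated by holding the frozen base weights and activations, which is why quantizing the base model is what brings the requirement down.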
i get this even tho i have KJNodes installed. any idea?
"When loading the graph, the following node types were not found:
easy getNode
easy setNode
Nodes that have failed to load will show as red on the graph."
Yes! Same for me. Can't find the right Custom Node for "Easy Set/Get".
Same problem! All previous workflows with the Koda Pack have also stopped working
I added a "no EasyUse" version to the latest pack to solve this problem for everyone affected. Some kind of conflict, it seems; it was easier to just rebuild with connecting "lines", so that is what i have done to address this.
updated packs include version without EasyUse nodes :D
the next pack solved this problem by adding versions without EasyUse nodes.
Didn't watch the whole video because I'm on a break from work. Does it work with controlnet?
no.. at least I haven't seen it, and I've been working with it the past 24 hours. It doesn't even have cfg and basically one sampler. But... who needs all that when it's got great prompt adherence? ;-)
not yet, but as goodie says, at lower denoise with higher model shift the prompt is followed so nicely that img2img can give you a lot of what controlnet was doing.
also in V4 I will release the Bridge version, allowing you to pipe SDXL+LORA generations into Flux. that will allow all the controlnets on the SDXL side; for now that will be the best we get until something new drops :)
in V2 i added the FLUX Sampler Garden workflow to the pack, which allows easy testing of all the samplers. aside from the defaults, i like Euler & dpmpp_Alt
and yes - the prompt adherence is beyond award winning. it's a great thing to see after SD3 kinda fell over.
Thank you for making the tutorial video. This is my first time installing ComfyUI to test Flux1. I followed all the instructions and installed everything, but when I click on 'run_cpu.bat' (or 'nvidia'), sometimes the ComfyUI screen appears, and sometimes it doesn't. When it doesn't appear, the last command line shown is as below, and when I press any key, it just disappears and nothing happens. I've tried many times but still can't open it. I don't know what this error is. Could you please guide me on how to fix it? Thank you very much.
D:\ComfyUI\ComfyUI_windows_portable>pause
Press any key to continue...
ok well, run_cpu.bat does not use your GPU, so most likely you should ignore that one and use the nvidia one.
if you have problems with your comfyui portable install, and it has never even worked yet, save yourself the headache: delete it and start with a fresh portable download. provided your internet does not crap out on you halfway through, it will simply install itself. it sounds like something has gone wrong with the installation - easier to start fresh is my advice
Any likelihood of affordable GPU LoRA training with flux dev?
oh yes! i am still looking into this at the moment but here are the places to start
(comfyui has already added loading FLUX lora support, looking at the commits)
SIMPLETUNER: github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md
- Quantised model training (quote):
Tested on Apple and NVIDIA systems, Hugging Face Optimum-Quanto can be used to reduce the precision and VRAM requirements, training Flux on just 20GB.
I have not had the time to check this out, but seems this is the move.
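The VRAM saving behind the Optimum-Quanto quote above comes down to simple arithmetic. A minimal sketch, assuming the commonly cited ~12B parameter count for FLUX.1 (an assumption, not a figure from this thread):

```python
# Back-of-the-envelope VRAM for just the FLUX.1 transformer weights.
# The ~12B parameter count is an assumption (commonly cited), not official data.
params = 12_000_000_000

bytes_fp16 = params * 2   # 2 bytes per weight at fp16/bf16
bytes_int8 = params * 1   # 1 byte per weight after 8-bit quantization

GiB = 1024 ** 3
print(f"fp16 weights: {bytes_fp16 / GiB:.1f} GiB")  # roughly 22 GiB
print(f"int8 weights: {bytes_int8 / GiB:.1f} GiB")  # roughly 11 GiB
```

Quantizing the frozen weights roughly halves that footprint, which is how the quoted setup can fit weights plus optimizer state and activations for LoRA training into about 20GB.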
an 8GB RTX 4060 is running FLUX.1 Dev without any problem
thx, it really helps people out here looking for those confirmations! thanks again
is it gonna work on 8 GB VRAM?
according to others in the comments, we have seen people with a 3060 6GB (laptop), a 4060 8GB, and a 3060 12GB so far, using the model without problems.