Don’t forget to like and subscribe for more tutorials about AI!👇🏻
www.youtube.com/@Jockerai?sub_confirmation=1
Can I do it with 11GB VRAM??
@@TheGuitarnob yes you can
Could I get the prompt of the picture in the middle at 0:08? I'm asking for a friend.
Many channels don't show the stuff properly, some just sell paid stuff, some don't have the pro tips. Glad I found your channel.
you're very welcome my friend💚
I run an i7 Q7700K, BIOS overclocked, 64 gigs of RAM, and a 12-gig Nvidia 4080, and generate in under a minute with your settings. Thanks a ton Jockerai...
@@biggreg100 you're welcome bro ✨
With this turbo mode I render an image in 37s with a 3060 12GB, a 10-year-old CPU, and 16GB RAM. Insane, I can finally use Flux! Thank you!
@@sephirothcloud3953 happy to hear that 🤩
You're welcome mate✨
Exceptionally done! I mean the video. This is what content creators should do. Well done mate!
Thanks a ton! 🙌 I really appreciate your kind words. It means a lot coming from someone who knows the effort that goes into creating content. Glad you enjoyed the video, and I’ll keep working hard to bring more valuable stuff. Cheers, mate!
I am sometimes very skeptical about these things, but this worked! One generation at extremely high quality took 32 seconds on my Nvidia 3060 with 12 gigs of VRAM. Bro, thank you so much! This is crazy!
I’m really glad it was useful for you, bro! It’s awesome to hear that it worked so well and so fast for you. Makes all the effort worth it!
@@Jockerai can you make a tutorial with this formula including controlnet? Tried it yesterday but i get an error due to out of memory hahah
Keep doing the awesome work!
@@Oxes yes, it will probably be the next video, about Flux ControlNet
You're lucky; I have a 3060 Ti video card with 8 gigs of memory, and FLUX itself takes 20 minutes to load. That's just too long.
YouTube surfaced your video in my feed, glad it did. You are an amazing creator. Thanks much, mate. Subscribed
My god, bro. You are my savior! Thank you for this lifehack, now I get amazing results in only 12 seconds on my RTX 4080 Super in Forge with the large 23GB model. I'm really happy!
Really amazing stuff 🙂
I tried this and I have to say what a revelation! I use a regular prompt of my own devising with a specific seed for testing and comparison.
At 20 steps it produces vastly superior lighting, shadows, and details of landscape, for a second or so more processing time. The skin tones are much better detailed.
At 8 steps it was a lot faster of course, but mildly disappointing. Nothing wrong with the result or any errors, just a little flat.
But a compromise of 12 steps gave a better result than 8 with the LoRA, and slightly better than 20 without it.
Test and iterate at 8, again at 12, and produce at 20 for a great result.
Thanks for the heads-up man. I love it.
Thanks for sharing your experience! I appreciate it, but I have to disagree. The images I've generated with the LoRA at 8 steps, especially with the weight type set to default, turned out amazing, both in depth and quality. And honestly, the difference in processing time between 20 steps and 8 steps is definitely more than just a second. I mentioned that in the video, my friend!
@Jockerai My testing was without any other changes than the number of steps. Weighting and other tweaks will always change the end results. From what I tried, the speed increase was in line with your results. 12 iterations with no other variations gave a better result, and 20 is massively better. The LoRA gives beautiful results with minimal effort, and having the choice of anything between 8 and 20 steps, with equal and better results, gives people a hell of a lot of options. Changing weighting refines to potentially even better output, as is normal.
Thank you so much, great work!
Game changer. This works very well.
@@Zanroff 🤘🏻🔥
Wow, awesome!!! 13 minutes on the GTX 1650 haha... thank you, bro!
@@999hamstein you're welcome bro ✨
great work thank you
@@CrustyHero you're welcome my friend
The fastest, and great quality, no joke. I was getting 60-90 secs to generate with Flux1-dev, and now I just need 25 secs on my 3080 Ti laptop.
Sure, that can work. The problem is when you want to use other LoRAs on top of the Turbo LoRA. Then you run into problems, since you can't have too many LoRAs active at the same time and expect great results.
@@freneticfilms7220 I had amazing results even with 4 LoRAs at the same time.
@@freneticfilms7220 the trick is, when using multiple LoRAs with this method (the 8-step LoRA), it's better to use the default weight type
Excellent, it creates them for me in 15 seconds with my RTX 4080
Yeah, it's because HYPER has the Hyper LoRA inside it, which is a ByteDance creation. If you want that, you can simply add it as a LoRA; it's on their Hugging Face site. Thing is, it has a fairly detrimental impact on image quality. IMHO, all their HYPERs do.
Q8 GGUF, well, Q8 is simply reduced quality INSIDE that model. Imagine it like JPEG compression and you get the idea of how it works. Another issue with GGUF is that it's "dumb" compression, so the reason it's slow is that it basically needs to be unpacked (or unzipped :D) to be used, which costs some extra hardware processing time.
The solution outside of this? Well, NF4 obviously. NF4 v2 to be precise. The only downside is that on ComfyUI, to use it with a LoRA, you need a metric ton of VRAM, as it needs to load everything into VRAM.
It could be solved if someone wrote something to "simply" convert a LoRA to QLoRA (NF4), so it wouldn't need to be done on the fly.
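To make the quantization point concrete, here is a minimal sketch in plain PyTorch of naive per-tensor 8-bit quantization (real GGUF Q8 uses block-wise scales, so treat this purely as an illustration of why quality drops and why the weights must be unpacked back to float at run time):

```python
import torch

# Toy per-tensor 8-bit quantization, loosely analogous to Q8
# (real GGUF quantizes in blocks with per-block scales; this only
# illustrates the two points above: precision loss, and the need
# to unpack back to float at inference time).
w = torch.randn(4096)                    # pretend these are model weights
scale = w.abs().max() / 127              # one scale for the whole tensor
q = (w / scale).round().clamp(-127, 127).to(torch.int8)  # compact storage

# At run time the int8 values must be expanded back to float,
# which is the extra processing cost mentioned above.
w_restored = q.float() * scale
print("max rounding error:", (w - w_restored).abs().max().item())
```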
Mac doesn't support FP8. Do you have any secondary recommendations to replace the FP8 model that would still give good-quality images fast? Mac is optimized for FP16. And another question: is there a reason why you didn't pair the GGUF models with the Turbo LoRA?
perfect ❤❤
It does not speed up anything on my RTX 3090. It actually seems to slow it down. It takes fewer steps, yes, but each step takes longer, so it doesn't go any faster.
Very cool! Try Valhalla if you want an easy start!
What's that? Could you explain?
Very good thanks ❤
You're welcome eshgh😍❤️
Did you know about this node called "Flux Sampler Parameters" from the ComfyUI Essentials nodes? The node combines the seed, sampler, scheduler, steps, guidance, max shift, base shift, and denoise, but with the way it's designed it even lets you do plot comparisons easily between any of the aforementioned parameters. Since it's not a drop-down menu, you just write anything you want to compare (e.g. "euler, deis" for the sampler, or "8,12,20" for the steps), and it will generate the pictures consecutively while applying the parameters you enter. I find it really great; I'm surprised not a lot of Flux workflows have it.
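The sweep behavior described above boils down to expanding the comma-separated fields into a grid of runs. A rough sketch of the idea, with hypothetical field values (not the node's actual code):

```python
from itertools import product

# Hypothetical field values, written the way the comment describes
sampler_field = "euler, deis"
steps_field = "8,12,20"

samplers = [s.strip() for s in sampler_field.split(",")]
steps = [int(s) for s in steps_field.split(",")]

# One generation is queued per combination, so the results can be
# compared side by side as a plot/grid.
for sampler, n_steps in product(samplers, steps):
    print(f"queue generation: sampler={sampler}, steps={n_steps}")
```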
Very interesting, mate. I haven't used that yet. Send me a workflow or link and I will try it out. Thank you for sharing your experience 💚
@@Jockerai Hi, I don't know if it's YouTube or something else, but I replied to your comment a few hours ago, and now that I check I don't see it anymore... I can try sending it back to you in the comments (the workflow, I mean), but I'm not sure that won't happen again. If you prefer, tell me another way I can send it to you, because I don't think there are DMs on YouTube, right?
@@phenix5609 yes, please send it to me on Telegram: T.me/graphixm
Thanks a lot for your guide, mate! The strange thing is it works fine with ComfyUI but absolutely doesn't work with Forge. It just crashes: "Connection errored out" in the browser and no errors in the console. Both installed via StabilityMatrix. 4070, 32GB, SSD.
Glad I subscribed
Welcome bro happy to hear that✨😉
Ali mama
Ali mama is Ali's mom and Ali baba's wife probably 😁😁
Hmmm, it is very interesting, but it is a hassle to change my working environment because there are so many updates every month, so I will wait until the lightweight version of Flux 1.1 is released.🤔
Thank you for such a detailed video! The only question is where I can get the DualClipLoader files?
@@Sergei_CG you're welcome my friend. You can find the download link in the description of this video : ruclips.net/video/QmYoGPHdQfA/видео.htmlsi=kcgrTfd_o9miAkHs
Absolutely awesome! Can it be further extended with ControlNet etc.?
yes it works with everything. 😎😎
I have tested it with: Canny, inpainting, outpainting, multiple LoRAs, depth ControlNet and...
Sir hi again, did u try fp8 versions?
@@genAIration yes it works easily
@@Jockerai yeah, but the generation speed is also the same according to my experience. So we don't need the fp8 versions as well, I guess
Am I missing something here? I don't understand how my video card with 16GB can load a 23GB model; in fact, when I try your workflow with everything exactly like yours, it crashes when trying to load Flux1-dev???
That 23GB is probably the size of the model on your HDD, not in your VRAM.
Hello, greetings from India.
I had a question.
Is there a way we can create images with a shirt (or any other clothing) image as an input?
Like an avatar wearing a specific piece of clothing created separately in a different picture? To keep the input clothing consistent across different characters.
It's for a personal styling POC project.
Thank you for the effort and time you give back to the community!!
Thank you for this! I'm wondering, does this negatively affect stacking custom LoRAs? Will they behave oddly with only 8 steps?
@@StrikerTVFang actually I tested it with multiple loras and the results were different. With personal loras which I taught in my previous videos , "default" weight had much better results.
For other loras you have to test because every lora could have different results. But in general there are no issues for stacking loras
Thank you very much! It really works! RX 5700 XT and 32GB memory: 278 sec. How can this setup be connected to Flux inpainting and image-to-image?
@@myta6op402 you're welcome bro. It is simple and you can use it in all workflows. Just keep the first and second nodes (the Load Diffusion Model node and the DualCLIPLoader node); the rest can be any workflow, such as inpainting, img2img, ControlNet, etc.
Check this video for a controlnet example : ruclips.net/video/pvU5fkBVHwI/видео.html
@@Jockerai Thanks, buddy! That's about how I imagined it! I got carried away with all this quite recently, so there are a lot of difficulties associated with it!
@myta6op402 Glad to hear it’s working out for you! Totally get the excitement, it’s easy to dive deep into this stuff once you start. Don’t worry about the challenges; everyone goes through a learning curve with these setups. If you ever run into specific issues or need help, feel free to reach out. Keep going, you’re doing great!
@@Jockerai You can't even imagine how you help people! I've spent so many hours on non-working, outdated and crooked builds! Your presentation style is just great! Everything is simple, accessible and the main thing is clear! You save people time and keep them motivated! Keep it up ! I hope the universe will reimburse you for your expenses! Thanks! I've been struggling with Flux inpaint for three days! I've tried many Flux models! The generation sometimes took 40 minutes and sometimes 30 seconds, but it didn't work properly! I needed to add a black cat sitting on the floor to the image! Anything but a CAT appeared in my image! And so, literally now, after reading your message and comparing everything that you described, I have a beautiful black cat in the picture!!!! 150 seconds, which is great for me regarding my PC and the output quality! It's epic, a delight and a sea of emotions!
@@myta6op402 Wow! Your message just made my day! Seriously, it’s awesome to see all your hard work and persistence paying off and that black cat story had me smiling! Thank you so much for the wonderful wish; it really means a lot . It’s messages like yours that keep me motivated to keep sharing and helping out. Keep experimenting and having fun with Flux you’re doing amazing, and I’m here if you ever need anything else! Let’s keep that creative energy going! 🚀✨
Image quality goes up even higher if you use the turbo but with 20 steps.
@@Zuluknob very good idea thanks for sharing bro
Flux1 Dev main Model download doesn't work. Says "File wasn't available on site"
Change your internet connection, VPN, or browser and test again
@@Jockerai In order to download this file you need to log in and accept the license.
@@Sergei_CG Didn't see that originally, thanks
Thank you very much! I can't find the Load Diffusion Model node anywhere; could you let me know where it is? Thank you!
What kind of workflow do you upscale with?
These two workflows:
ruclips.net/video/oVnTZLRgUC0/видео.html
ruclips.net/video/NKwXV5kgwD0/видео.html
I don't know why Comfy UI is using my RAM instead of my GPU, which makes the image generation process very slow :(
1: What GPU do you recommend for the price? An entry-level GPU? 2: With this Turbo Flux LoRA, how can I use another LoRA? I need to create images with a LoRA while at the same time using the Turbo LoRA.
@@maxlux-xj9nh I recommend only Nvidia GPUs, 3060 or above, and 12GB of VRAM or above.
For multiple LoRAs, just click Add Lora as many times as you want and load more LoRAs to use all of them at the same time. You can watch the full tutorial here:
ruclips.net/video/-Xf0CggToLM/видео.html
I am trying to add a ControlNet to this workflow but with no success, do you have a workflow in which you did so already?
I have already uploaded two videos including ControlNet with this 8-step turbo; watch them on the channel: OpenPose and Depth map
And where can I get these "weight dtypes" in Load Diffusion Models?
i get "does not accept copy argument" error on ksampler everytime try to use flux nf4 with lora
"I can't download because this model doesn't support loading. File wasnt available on site
When I load your settings into ComfyUI I get this error: Power Lora Loader (rgthree)
Click Manager and then click "Install Missing Custom Nodes", then restart ComfyUI and you are done
Hi, thanks! Does it work with Forge? And where can I get fp8_e5m2?
@@ehabeltorky88 yes, it works, and on the top menu bar there is a section, I forget its name, but when you open it you can see options for fp8-e5m2
@@Jockerai I think you mean Diffusion in Low Bits, thanks I'll try it.
Does the default 23 GB Flux model work on 8GB VRAM? Or should I go with the GGUF model?
If you have a 3000-series Nvidia GPU or above, yes it does.
Please make a workflow like this where inpainting is supported. Thanks!
@@marsonal it is the next video, stay tuned my friend
@@Jockerai thanks for great work
Dunno what I'm doing wrong, but I get the error listed below; I noticed that the DualCLIPLoader boxes had a red ring around them:
Prompt outputs failed validation
DualCLIPLoader:
- Value not in list: clip_name1: 'ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors' not in []
- Value not in list: clip_name2: 't5xxl_fp16.safetensors' not in []
Thanks for the research. But I still wonder how you can run DEV 23G locally. I have a 24GB Card, with let's say 20GB of available VRAM, and always run out of memory. What system do you have ?
I run it with a 12 GB card, the model gets partially loaded and then shuffled around (increases generation time of course, same goes for lora training). In ComfyUI there is an option to activate it, but I have no idea where, because for my card comfy activated it by default.
You should be able to use the main Flux dev easily, but something is wrong with your settings. Please send me the full error or log; I will help you out.
I run dev with --fast on a 4090 and I'm generating in 12 secs with 25 steps at 1920x1080
or you can try this :
1. Open System Properties:
- Right-click on the Start icon and select System.
- In the window that opens, under Related settings, click on Advanced system settings.
2. Access Performance Options:
- In the System Properties window that appears, go to the Advanced tab.
- Under the Performance section, click on the Settings button.
3. Virtual Memory Settings:
- In the Performance Options window, go to the Advanced tab.
- In this tab, under the Virtual Memory section, click on the Change button.
4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.
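To check whether the RAM-plus-pagefile budget is actually big enough before loading the 23GB model, a quick sketch using the third-party psutil package (my own suggestion, not something from the video) looks like this:

```python
import psutil  # third-party: pip install psutil

gib = 1024 ** 3
ram = psutil.virtual_memory()
swap = psutil.swap_memory()  # on Windows this reflects the pagefile

print(f"RAM total:  {ram.total / gib:.1f} GiB (available {ram.available / gib:.1f} GiB)")
print(f"Swap total: {swap.total / gib:.1f} GiB")
# Rough rule of thumb from this thread: RAM + swap should comfortably
# exceed the checkpoint size (23 GB for the full Flux dev model).
```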
@@equilibrium964 just use a quantized (GGUF or NF4) version in combination with the 8-step LoRA, or even a merged checkpoint.
Do you have a GGUF AI model workflow that can generate random images using an LLM?
It's working but taking some time. And after the image is generated, it's showing all black. It's like the image is not loading. After the image is generated, the Save Image node is still black. There is nothing in there. Am I doing something wrong?
So I tried it. The overall quality looks really good, but in almost every picture containing hands, they look awful. Almost every hand was garbage.
I have tried it, but while loading the model my ComfyUI crashes. Any solution? I have an NVIDIA card with 12GB VRAM
I have a problem: whenever I change the prompt, my system loads the model all over again. Is there a fix for this?
Other than that, it's all good. With a 3060 Ti I get around 30 sec gen time.
@@mohsen1208 if you watch the video, I used a trick: when you change the prompt and the green bar reaches the SamplerCustomAdvanced node, cancel the process and click Queue Prompt again
Can't even download the model, it throws an error 401 and that's it. I tried with another Flux model and it didn't work either. When I try to generate, it just says:
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'ae.safetensors' not in ['sdxl_vae.safetensors', 'taesd', 'taesdxl']
DualCLIPLoader:
- Value not in list: clip_name1: 'ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors' not in []
- Value not in list: clip_name2: 't5xxl_fp16.safetensors' not in []
Excuse me, where is Load Diffusion Model?
Why does it take 10 minutes to render 1 image on an RTX 4070 SUPER?? And how can a 23 GB model fit into 12GB VRAM?
try this :
1. Open System Properties:
- Right-click on the Start icon and select System.
- In the window that opens, under Related settings, click on Advanced system settings.
2. Access Performance Options:
- In the System Properties window that appears, go to the Advanced tab.
- Under the Performance section, click on the Settings button.
3. Virtual Memory Settings:
- In the Performance Options window, go to the Advanced tab.
- In this tab, under the Virtual Memory section, click on the Change button.
4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.
On low VRAM (4GB), the GGUF version gives the highest quality. Yes, the speed is slow, but this one is slower with less quality...
great tutorial, i am trying to do this on a MacBook Pro M1, however i get this error: "Trying to convert Float8_e5m2 to the MPS backend but it does not have support for that dtype." is there a way to achieve this? Thanks!
Thank you, glad you enjoyed the tutorial! 🙌 Regarding your error: The issue you’re seeing comes from the MPS (Metal Performance Shaders) backend on Mac, which currently doesn’t support the Float8_e5m2 datatype. Unfortunately, Apple's M1/M2 GPUs don't have full support for all data types like some other GPUs do.
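A minimal sketch of the usual workaround, assuming a recent PyTorch build: probe whether float8 is usable and fall back to fp16 on MPS (the dtype Apple GPUs are optimized for):

```python
import torch

def pick_dtype(device: str) -> torch.dtype:
    # float8_e5m2 exists in recent PyTorch builds, but the MPS backend
    # cannot use it, so on Apple GPUs we fall back to fp16 instead.
    if device != "mps" and hasattr(torch, "float8_e5m2"):
        return torch.float8_e5m2
    return torch.float16

device = "mps" if torch.backends.mps.is_available() else "cpu"
print("using dtype:", pick_dtype(device))
```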
@@Jockerai thank you for the reply! I hope it will be supported very soon, or there will be a workaround! Again, thanks for the great content... keep them coming!
Will the original version work on 16 GB VRAM?
Does this work with other LoRAs as well? For example a character LoRA?
Yes, sure, you can add more LoRAs by pressing the Add Lora button. But the point is, in my tests the "default" weight worked better with character LoRAs. However, you can test
@@Jockerai working, yes, but does it still hold the quality? Most of the time when using multiple LoRAs the characters get merged
@@JointyTv in my tests, yes, it works with the best quality, but you should also keep in mind that the quality of the LoRA itself matters as well, especially if it's a custom LoRA
what about show text phrases right?
my laptop keeps crashing, why doesn't this work on my device
RTX 4070ti 12GB. generated a black empty image :P
Something is not set correctly in your workflow
There are no links in description...
Now the links are in the description, please check again. The video was uploaded and I didn't notice 😬
running extremely slow on 3060 12gbram, | 4/8 [04:43
@@AIBizarroTheater which weight did you use?
@@Jockerai thanks for answering. Flux dev fp8
@@AIBizarroTheater try this :
1. Open System Properties:
- Right-click on the Start icon and select System.
- In the window that opens, under Related settings, click on Advanced system settings.
2. Access Performance Options:
- In the System Properties window that appears, go to the Advanced tab.
- Under the Performance section, click on the Settings button.
3. Virtual Memory Settings:
- In the Performance Options window, go to the Advanced tab.
- In this tab, under the Virtual Memory section, click on the Change button.
4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.
Am I the only one observing quality loss, especially with human anatomy?
is it fp16? I heard fp8 is better for 12gb?
Before this LoRA was released, yes, but now if you set the weight type to default it will be fp16, and with 8 steps it will generate faster. It is up to you: fp16 is better in quality and a bit worse in speed
What does the 23G mean?
Seeing that Dev is a 12B parameter model
The 23GB is the file size of the base model. It's misleading lol
Pretty sure 23G is the total file size.
@@voiceofreason9780 Yes, was just trying to discern your intention. Thx!
In the video, I used the model's file size of 23 GB to make it clear which version of Flux I was referring to, as that's the most noticeable difference compared to the other versions. :)
@@Jockerai This is referred to as the base model. I'm not sure what you mean by comparing it to other versions. Fine-tuned models typically have different names or may come in entirely different formats, like GGUF. Let's avoid comparing models based solely on file size.
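For what it's worth, the two numbers are consistent: a back-of-the-envelope check, assuming the 12B parameters are stored at 16 bits (2 bytes) each:

```python
# Flux dev: ~12 billion parameters at 16 bits (2 bytes) each
params = 12e9
bytes_per_param = 2  # bf16/fp16 weights
print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~24 GB, matching the ~23GB file
```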
Does this work in Flux ForgeUI?
It should, but I didn't test it. Tell me if you test it, thank you
Has anyone tried training with stereo images?
My problem with Flux is that at every image generation it has to load the model again and again, while SDXL doesn't. I even tried with a GGUF Q4 that is fewer GB than an SDXL model, and it's the same. Do you have any advice for it?
It loads the models for the first generation, but for the next one it shouldn't.
Anyway, tell me your PC specs please
@@Jockerai 3060 12GB. OK, then it's 100% my RAM; I have 16GB. But it's weird that it happens only with Flux, even with a 6GB GGUF model, while it doesn't happen with 12GB SDXL models, idk...
@@sephirothcloud3953 you can try this :
1. Open System Properties:
- Right-click on the Start icon and select System.
- In the window that opens, under Related settings, click on Advanced system settings.
2. Access Performance Options:
- In the System Properties window that appears, go to the Advanced tab.
- Under the Performance section, click on the Settings button.
3. Virtual Memory Settings:
- In the Performance Options window, go to the Advanced tab.
- In this tab, under the Virtual Memory section, click on the Change button.
4. Select the drive where you have ComfyUI installed for example Drive C. Once it's selected, set the Initial size and Maximum size to the highest values you can. I put 35000 for initial size and 38000 for Maximum size.
@@Jockerai Yes, I already have 40000. I just boosted it by moving the models from HDD to SSD, 10x the speed. With your turbo mode I render in 37s, or 111s with re-loading; GGUF Q5 is 48s, or 78s with reloading. I'll buy more RAM. I can finally use Flux with your turbo mode, insane, thank you! :)
With how bad skin textures look in Flux, it had better generate 15 images a second on 4GB instead of this.
Where can I get that fp8-e5m2 file? Thanks in advance
No need to download it separately. Just install the latest version of ComfyUI, download all the necessary files and put them in the proper directory, then load my workflow and you are done!
@@Jockerai thank you. It works 😁
Adding Hyper at 8 and 16 steps gives me the best images in my tests so far: more details, quality images. I make one image in 45 seconds at 20 steps with Hyper 16, using an RTX 3080 with 10GB VRAM. I tested Hyper and Turbo Alpha, and Hyper gives better output.
I'm talking about the dev Hyper, not any GGUF Hyper
how long did it take to generate?
Does this work in forge?
@@nideshmane5995 yes of course
Just another LoRA, like Hyper... And for Comfy, not for Forge.
Nothing special
where is the seed? I always get the same images
In "Random noise" node. you can set it to randomize or give it a number manualy
@@Jockerai omg i'm stupid😄
So can you place the diffusion_pytorch_model.safetensors (LoRA) inside the ComfyUI\models\loras folder or inside the ComfyUI\models\xlabs\loras folder?
In your previous guide ruclips.net/video/txDFK-RcUq4/видео.html on installing ComfyUI, you mentioned you should not put Flux LoRAs inside ComfyUI\models\loras, but in this video you are placing the LoRA inside ComfyUI\models\loras?? :(
@@I1Say2The3Truth4All good question. For LoRAs that belong to the XLabs team, you need to put them into the xlabs loras folder. But for the rest of them, the main loras folder is correct, and you can use them via the Power Lora Loader node. The same goes for personal LoRAs, like the Flux personal LoRA which I covered in one of my videos
Second, but no links 😢
Now the links are in the description, please check again. The video was uploaded and I didn't notice 😬
@@Jockerai Thank you!, you just got a new subscriber
The first image you present as exceptional: do you really believe what you are saying? The image is unfinished and broken. Look at the nose; there is a clear cut. A lot of the rest is blurry. So if you really believe this is a good image, then you have no idea what you are talking about. Sorry, mate. There is no way to trick your way around quality. You get good images with 10 steps from the base models as well. It always depends on the parameters.
Why do all women made by Flux have the same face with that specific chin? Is that a joke?
There is a lora that fixes the "Flux Chin".
Yeah, I agree with you, unless you describe the details of her face, or use a nationality like "Russian, Asian, American", etc. It is very useful
@@Jockerai I don't know where the ratings of generative models come from, but Ideogram is ten times better, with way more realistic and diverse results, even though it lacks detail.
Thanks, I know that. That's hilarious. Why don't they fix it? Do the makers of Flux have something like a kink for that type of chin? 🤣🤣😂
@@eromsetyb2524 yes it is great too
Can we have a tutorial in something not overly complicated, like SwarmUI instead of ComfyUI?
I get this error in CMD while running: "clip missing: ['text_projection.weight']"
Hi,
I did exactly as you showed in the video, but the command line shows this:
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
h:\ComfyUI_windows_portable>pause
Press any key to continue . . .
after press "Queue Prompt", i have 4060 16GB so is not to bad , but I dont get why is crushing,
Do you have any idea?
@@kargulo you may need to update ComfyUI. Go to the Manager and click Update All, then restart ComfyUI.
@@Jockerai after "update all" still the same , nothing change :)
Wow, this tutorial was amazing! I really needed it. Thanks a lot, my friend❤😍🫀
Thank you so much that was really uplifting mate❤️✨