My PC runs a 2080 Super with 8GB VRAM and has no issues, it just takes 2.5 minutes per image. I heard a 3090 or 4090 renders the same image in 14-15 seconds
awesome, i pinned your comment
@@SamilTerzic ComfyUI just seems to crash immediately at the Loading Checkpoint step when I try to generate. It just says "got prompt" in the command window, then the browser interface pops up a "reconnecting" error
Even with the lightest Schnell model.
I guess a GTX 1080 8GB VRAM and 24GB RAM is not enough, I'm told
i am getting 3 min and 20 secs approx on a 2080 Ti with 11 GB. :(
About to find out on a 4090 will let ya know.
On my 4090 it takes 4 to 6 seconds.
Finally, the SD3 we deserve!
Yeah, the one most of us can't run properly, let alone train. I really hope this gets some quality pruning, because otherwise it WILL be a flub and be forgotten within a month
@@df1ned hahahaha you are a joke
Yes and no. Still lacks a commercial license on the base model.
@@df1ned I mean, as tech goes, no, it doesn't get easier to run.
Everything requires more power, not less. You guys are starting to sound as bad as gamers 😂
@MaddJakd you do realise that it will literally have to, right? There is a lot of value in being easy to run. Also, the main hindrance in the further evolution of generative AI right now is how expensive they are becoming to train and run. Look at LLMs. Since ChatGPT, the main vector of evolution was "make this smaller and more efficient" with similar capabilities to the full size model. And what on earth do you mean "as bad as gamers"?
I never thought my pic 01:54 would be in the video, considering the sheer amount of pics shared by the community. Thanks Olivio 🥰
A surprise, to be sure, but a welcome one ;-)
wow, really? this is amazing! congratulations!
Pog
I've been using SD1.5 for a long time, but this one really caught my attention. I was looking for setup guidance just now but somehow couldn't find a proper one, and you're quick! Thanks a lot
Make sure that your PC is up for the task ;-)
It's so exciting that it rivals Midjourney !!
It’s annoying how good Midjourney is
@@angryox3102 I like midj but hate that they force you to use discord
It has such fine control over the aesthetic and style through prompting alone that you won't need to use artist names or even loras in most cases.
I've been using the online demo since yesterday and it just keeps blowing my mind with how strong the visuals and prompt following are.
Also way fewer LoRAs needed, as it understands what I want without me needing to brute-force it with small additional data. Mind blowing
@@Zefrem23 Yes, I think you are right. I have been trying out SD, Fooocus and other popular AI tools on MimicPC for a while now. My peers and I thought it could achieve the same result, but maybe I am not skilled enough, as the generated images didn't go beyond my expectations. It is a free online tool you can try out!
Running really well on my 3090 24GB. Very impressed with the quality.
brief and straightforward introduction with everything linked in the description, perfect for a quick start into Flux. Great video, as always! Thank you Olivio! ❤
Those are some stunning results. Especially text and eyes... wow!
I've been using A1111/Comfy for a few weeks to design Pinterest pins. Usually I reject 90%; from my first couple of batches with Flux I'm wanting to keep 90%! Very impressed so far, and when we get ControlNets, LoRAs etc. it's going to be revolutionary.
Amazing Model!
With an RTX 4080 it takes about 35s to generate one 1024x1024 image.
Really Happy for the Ai community getting the new model they deserve
works fine on a 4070ti, thx for the explanation...
I do the same thing with the 3060 12GB, but it takes 4 min and 6 sec per image. I'm watching your video from Bangladesh. Your work is awesome, man. Take love
Cherry-picked or not, those shots look great. And yeah, this is what SD3 "should" have looked like. :)
With 12GB VRAM it goes into low VRAM mode and even 4 steps take 1:20 or more. Worth it though! You can't get this level of prompt adherence with SD imo.
The new king --that gobbles up VRAM and HDD space like crazy 😞
Any numbers please!
The quality's gotta come from somewhere!
Can run on 12 GB VRAM.
Don't spread nonsense. This thing can be run for free.
And it is very early days; SD took hard work to get running on consumer hardware when first released.
@@nix9409 I like my AI run locally
I was asking Microsoft's Copilot about flux.1, and it pointed to you. Thanks for the video!
80% perfect hands! Amazing.
I never feel like a new model has truly "arrived" until I get to see the Olivio take on it. You're the king of "cozy" AI coverage :D
Wow this is absolutely stunning! Definitely seems like the local model to go with for general artistic, digital, rendered images with very passable real people. The established 1.5 ecosystem is still preferable for specific use cases, NSFW material and so on. But this is an excellent option!
On the Black Forest Labs website they say they are working on a video generation AI model. Looking forward to learning more, and I hope it will be open source with an interesting license
I tried it last night. My video card (20GB!) barely handles it, but the results are sooooo goood!
Looks like a lovely model.
Hope the fine tuning video will hit soon. This model seems quite a bit different to SD, so I suspect fine tuning will be different.
Yes, the lack of fine-tuning is the only minor drawback, but when I run Flux on MimicPC it's loaded directly, and other people's workflows are really excellent. I only need to change them a little bit, and I get an image more beautiful than if I'd worked on it myself for 2 hours!
First of all, thanks Olivio for your video :)) Schnell worked for me with an RTX 3060 Ti 8GB and 32GB of system memory in low VRAM mode. The simplest prompts take 65 to 90 seconds, while more complex prompts can take up to 170 seconds. The best combination for me is weight type fp8_e5m2, fp16 clip and 4 steps (1024x1024 images). Sometimes images tend to be a bit blurry, but it depends on the prompt.
Honestly, this is mind-blowingly, stupidly good. Hope to see some inpainting, LoRA and upscaling workflows soon. This is what SD3 should have been.
So excited to try out the Flux model!
A Rev Animated style using this as a base is all I need (for now...)
edit: It looks like it can create a similar style, but still not close to what I loved RevAnimated for.
I've read that the creators of this model, Black Forest Labs, were the original people who helped create Midjourney. So that would help explain why we get such great artistic results.
@@KDawg5000 I think they are the ones that worked on SD3, not on MJ...
That team split from Stability AI. I am happy to see that Midjourney is starting to freak out about this; they're starting to ban a lot of people without any reason and adding many restrictions, like around Trump or Biden. I just tried a few prompts and got banned for 5 days...
@@stefanoangeliph Yeah, I realized that after I replied, but was too lazy to go back & edit, lol
Great -- Thanks, Olivio! (5-fingered hands!)
With about 30 million dollars from investors they did a great job. What impressed me most is that this team of engineers is the same one that worked at Stability AI 😁😁
I've been trying out this model since this morning. I am speechless. Unprecedented quality for the open-source space. Impressive. On my workstation I tested the dev version. OH MY GOD! I have no words. I apologise if my English is not the best! 😂😂
Great video! And it was a fun little surprise event on the discord channel too!
Thanks you very much !
Hopefully we get an optimized version that can run on most GPUs. I remember when SD came out it also required insane VRAM
True. It is pretty slow on a 4080. Up to 7 min (sometimes less than 3) on the dev model with 20 iterations.
@@mkDaniel it takes around 30-50 secs for me on 4080
@@sandeepm809 how?
@@mkDaniel I am using the lowvram option in ComfyUI. Other than that everything is the same. It takes a long time to initially load the checkpoint, but once it's loaded every other render takes less than a minute
@@mkDaniel Someone said they are running it on a 4GB GPU; it takes about 3 min to generate. I am running an RTX 4060 with a Xeon CPU and 16GB RAM
ForgeUI with the Flux1.S model from CivitAI that has the T5 fp8 and clip model merged with Flux 1 Schnell into one file. I was able to run it even on a very old 4GB VRAM + 16GB RAM laptop to generate a 1280x720 image, which I would not do again because it took 15 minutes for 4 steps. But it works.
I wonder what the specs are for fine-tune training, if mixing is possible, and if it can make and use LoRA-like models etc. Stuff like that is what really determines whether these models have longevity. The massive VRAM requirement is certainly a blocker for most of the community. Unfortunately consumer-grade GPUs haven't caught up to that VRAM amount on their average models. So it's mostly going to remain a small section of the community that will use it, until better GPUs become affordable or at least smaller models are made available
This is Awesome Thank you
I would say right now it is as close as you get to Ideogram-level performance, but with fine-tuning this will become one of the best models out there. Also, has anyone tried running the full version via the API call? That might be even better, even though it's 5 cents per image. My suggestion is to use prompts similar to Ideogram's, which are much more descriptive.
4:00 I might get a sleepless night if you consider that as an Autobot. My man😂
Huge thanks for this
The details in the eyes are really impressive! It's like a real photo! There we have it now, who needs a photographer? Anybody will be able to be a photographer without owning a DSLR camera :D and no need for models, no need for a makeup artist. Only a browser and a computer in the cloud ^^
This kind of thought is what keeps people away from AI. It's better to say AI is here to enhance workflows and expand the digital space, not to be a substitute that kills industries.
@@DanikaOliver Exactly. As a photographer I find his statement completely disrespectful. I'm for generative AI being a tool to enhance our work, but saying that is just a slap in the face to the entire art of photography and art in general. Photographers put years into mastering their technique, understanding light, composition, and timing. It's about seeing the world through a unique lens, capturing a moment, a feeling, a story, not just about high-resolution eyeballs on a screen.
I used your comment as a prompt for Flux and got an amazing portrait of a woman.
@@TechieJunk now I'm curious.
This is the future of image AI generation; it runs great on a 4090
Hope my 4080 can handle it.
Obviously, you have the #1 consumer-friendly graphics card on the market 😁
Dah...
Yes, that's a gundam !!!!
I've been playing with FLUX for the last 24 hours and so far it's been great. I can't wait till people train some LoRAs and we get the rest of the tools like ControlNet.
EDIT: Also I get good fast results w/ these settings using an RTX4090
unet_name - Flux1-schnell
weight_dtype - fp8_e4m3fn (I found this works better/faster than leaving it at default)
clip_name1 - t5xxl_fp8_e4m3fn (this works better/faster than the fp16 one, especially since I only have 32G of RAM)
also I used the dpmpp_2m sampler with the Karras scheduler, and 10 to 20 steps. I find it's crisper and more realistic than the others (just like if you were using SDXL).
Yes, I think Flux is awesome. I tried Stable Diffusion on MimicPC, and this product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.
Very cool! Thanks!
Thx good sir for the video.
Hi and thanks for the video! I noticed the model you downloaded into the Unet folder is different from the model they suggest (flux1-dev.sft)
The model used in the video is quantized: the numbers that make up the model are converted to use smaller number formats (they use only 8 bits instead of the larger original size) to save VRAM at the cost of a very negligible (less than 1% usually) drop in quality. It can increase performance sometimes too.
@@donaldhawkins6610 Thank you for the reply, much appreciated!
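To make that size/precision tradeoff concrete, here's a toy sketch. (Illustrative only: the actual Flux fp8 checkpoints use 8-bit *float* formats like e4m3, not the 8-bit integers used here, but the storage-vs-error idea is the same.)

```python
def quantize_int8(values):
    """Map floats to int8 codes plus one shared scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize_int8(codes, scale):
    """Recover approximate floats from the 8-bit codes."""
    return [c * scale for c in codes]

weights = [0.75, -0.5, 0.03125, 1.0]
codes, scale = quantize_int8(weights)
restored = dequantize_int8(codes, scale)

# Each weight now needs 1 byte instead of 2 (fp16) or 4 (fp32),
# and the round-trip error stays small relative to the values.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(round(max_err, 5))
```

Same idea, just with an integer code book instead of fp8 floats: you trade a tiny reconstruction error for roughly half (vs fp16) or a quarter (vs fp32) of the memory.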
looks amazing
Thank you Olivio
Amazing model
Testing my 4090 today 🚀
Render layers ?
Comfyui workflows ?
I was waiting for your video on Flux. I knew it was going to drop soon!!🎉
I found a Flux dev fp8 model; it's 11GB, half the size of the sh version
Ryzen9+4090+192GB it's ready to take the challenge :)
Schnell is probably best with the default 4 steps,
but you can get reduced quality images with even 1 step.
Yeah, they need to keep it online.
Hopefully in a few months us plebs with our 8GB cards will get some love.
it would be cooler if FOOOCUS came with a solution to include it.
I was disappointed by the evolution of SDXL; it's still very hard to run these days for most people. And now the bar is even higher. OK, it's based on transformers and can get a few improvements, but even with a lot of work it will still be heavier than XL. But the model is just impressive; I would risk saying that's Sora-level composition for images.
The VAE is confusing: I have the ae.sft file as my VAE, but at 08:22 you also have another VAE, diffusion_pytorch_model. I have not downloaded this, but aren't two differing VAE files confusing?
Well it uses both. One is text encoder.
Checkout Pinokio Forge 1 Click Installer with the NF4 Model
It would have been so very nice if we also saw the prompts used for some of the images. (wish list)
If this competes with Midjourney, does it mean Midjourney was always slightly better because they used 24GB models instead of 6GB ones?
Nice guide :)
Running on an RTX 3060 with 12GB VRAM. It's slow but works
how much slower?
@@KimiMorgam 1 hour per image.
@@KimiMorgam around 2.5 min per image with 20 steps
how long does it take you?
@@isabellatang7753 25 seconds per image at 512x512, 10 steps, cfg 1
Thanks as always Olivio. Love from Cape Town.
I still think realistic pony models are superior for nsfw but that can change tomorrow
thank you
Lost me with downloading a ton of folders/files, workflows, etc, etc. Jeez!
Looks promising. Thank you.
I’ve been saying for over a year that Stabilty was a dead end. Turns out it’s because all the talent left after SDXL. That said, not a big fan of the non commercial license on Dev. Happy we at least got Apache on Schnell though
This is great. Any idea how to do it on a high end Mac M3?
I hope there will be some hacks and optimisations to make it run on my 3600 6G RAM laptop some day 😢. Really waiting for that time to try it out
One small 'bug' in Flux.Schnell - gigantic hands - anatomically correct, but H U G E ! ! ! :)
top!! 😮👍
Thanks for the great video! Is ComfyUI the only way to run Flux locally?
❤❤❤
When I hit the "Queue Prompt" button, I get: "Error occurred when executing UNETLoader: module 'torch' has no attribute 'float8_e4m3fn'". I have all of the mentioned files in the right places. Updated my ComfyUI (which I hadn't used in a while). What am I missing here?
same here, anyone?
I decided to re-download ComfyUI from Github and reinstall it, just overwriting all of the files that were already there. Then I made sure that all of the Flux files were still there according to Olivio's instructions (they were). After doing that, the Queue Prompt button gives this error:
"Prompt outputs failed validation
DualCLIPLoader:
- Value not in list: type: 'flux' not in ['sdxl', 'sd3']"
.....the weird thing about this is that the DualClipLoader type IS "flux" when I load the workflow. But if I click on that field to pull down the choices, I get only SD3 and SDXL, no Flux. And so then you can't choose "flux" again. It's like the text "flux" appears as a value for "type" but it's not actually there as a valid choice. Hmmmmm......
@@JustAFocus I had the same issue. I didn't do a clean reinstall but ran the bat file, update/update_comfyui.bat, in a CMD window. I then started up ComfyUI and, when the program came up, ran the ComfyUI-Manager and chose Update All.
That seems to have fixed it.
Problem with ComfyUI is that it doesn’t seem to clean out or update everything.
Sometimes you might also need to run update_comfyui_and_python_dependencies.bat after a reinstall if you get Python version errors.
@@JustAFocus try running update_comfyui_and_python_dependencies.bat in the updates folder, I’m not sure why a fresh installation doesn’t work unless the downloaded installer is somehow out of date and isn’t the most recent version.
Same here, any news on this? Got a brand-new virtual machine installation, everything updated, and I'm also getting the "Error occurred when executing UNETLoader: module 'torch' has no attribute 'float8_e4m3fn'" error. Maybe the new version broke something?
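For anyone hitting this: the error usually means the PyTorch bundled with your ComfyUI install predates the fp8 dtypes (torch.float8_e4m3fn was added around PyTorch 2.1), which is why running the update .bat files mentioned above tends to fix it. A quick sketch to check your environment (the helper name is mine):

```python
def torch_supports_fp8() -> bool:
    """Report whether the running PyTorch build has the fp8 weight dtype."""
    try:
        import torch  # ComfyUI ships its own Python env; yours may differ
    except ImportError:
        return False
    return hasattr(torch, "float8_e4m3fn")

if __name__ == "__main__":
    if torch_supports_fp8():
        print("fp8 OK: the fp8_e4m3fn weight dtype should load")
    else:
        print("fp8 missing: update the bundled PyTorch via the update .bat")
```

If it prints "fp8 missing", updating the embedded Python dependencies (not just the ComfyUI code) is the step to try.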
My RTX 3060 (12GB) will even run the 23GB FP16 FLUX model. It loads from NVMe & spits out an image within 35 seconds for non-dev (images made afterwards are faster as the model is already loaded).
RTX 4070 Ti 11GB, executed the same query with the same setup 211.04 seconds
flux1-schnell-fp8
It also has a 22 GB version. Which one is better and what is the difference between them?
Dev version fp8 (12 gigs version) is what I use (20 steps), it's way better than the schnell version.
@@kkryptokayden4653 aura_flow_0.2, what is this? Did you test it? How is the model?
16 gigs
Thanks
So with a 4080 Super, at pretty much the default values of the provided workflow, I was sitting here for an hour, maybe two, and then I gave up after it sat at 5% for another 10 minutes.
that seems off. i have a 4080 too and it renders in a minute
I hope my 4GB can handle it, hehe.
Thanks brooo, this model looks fantastic. Do you know of any new models for img to video with beginning and end frame generation, or prompt inject?
i wish we got a chat UI to create nodes, configure nodes, and use nodes... rather than ComfyUI Manager (wouldn't it be cool if we could just ask a bot to generate the custom nodes we need?)
anyone know of a chatbot fine-tuned for ComfyUI?
it's a non-commercial model, isn't it?
my 3060 12GB VRAM takes about 80 sec for one image with the dev fp8 version. Loading is okay but needs a lot of RAM (not VRAM), ca. 40GB.
Also the text encoder models are the same as for SD3, so if you already have them downloaded there's no need to download them again
Does this censor?
@@SkyCuration yes to an extent
I am unable to copy and paste the files into the required folders like unet. How do I fix it?
My project idea for a performance would require an output of 2-3 fps, ideally higher. Images don't have to be crazy detailed or 4K or whatever. Is this model "real-time" capable, given a couple of 4090s? A webcam feed of people dancing, with an AI projection onto the back wall, is the idea.
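For what it's worth, some back-of-envelope math using the single-4090 timings reported elsewhere in these comments (roughly 4-6 s per Schnell image; these are assumptions, not benchmarks):

```python
import math

def gpus_needed(target_fps: float, seconds_per_image: float) -> int:
    """GPUs required if each card generates frames independently in parallel."""
    per_gpu_fps = 1.0 / seconds_per_image
    # round up: you can't buy a fraction of a GPU
    return math.ceil(target_fps / per_gpu_fps)

print(gpus_needed(2.5, 5.0))  # cards needed for 2.5 fps at 5 s/frame
print(gpus_needed(2.5, 0.5))  # cards needed if a frame takes only 0.5 s
```

So at today's reported speeds a couple of 4090s lands well under 1 fps; hitting 2-3 fps would need much smaller images, fewer steps, or a distillation aimed at real-time generation.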
Is it still heavily censored though like base SD3?
No
Do you need to change clip / vae models for different base models?
does a 1650 4GB have a chance? :(
5:29 the analog clock's Roman numerals are glitchy/off, which is the same problem as with SD.
Anyone else getting a "Node not found: CheckpointLoaderSimple.ckpt_name" error? I tried a different workflow setup and then it works, but following the basic setup mentioned in this video gives me that error. Everything is up to date, as far as I can tell
Mine for some reason gives up on the VAE decode. Don't know why. I have 16GB of VRAM. I'll eventually figure it out. Thanks for the video!
is there a collection of great workflows somewhere? ...like upscaler, neg prompts, img2img,...
My comfyui can't find the nodes for the update. :/