Oh, Nerdy Rodent, 🐭
he really makes my day; 😊
showing us AI, 🤖
in a really British way. ☕
😉
You had the first video on the internet about Flux
Flux will probably be the thing that will force me to get a new computer.
Thanks Nerdy!! Flux Episode 2 - A New Hope 😊
Ooo you appeared right under my thumb good job 😂
Flux ControlNets are working with ROCm 6.1 in WSL and 6.2 in Linux (6.2 isn't supported in WSL yet), both with PyTorch 2.4 and both tested on an RX 7900 XTX. There were some OOM glitches initially, but they seem to have been resolved now.
Nice!
@@NerdyRodent Aye, having 24GB of VRAM is almost mandatory for Flux when using ControlNets or LoRAs now! Just wish there was a way to leverage multiple GPUs to ease the pain.
@@ThirdEnvoqation Didn't try it myself, but maybe ComfyUI-MultiGPU extension is what you want.
@@ThirdEnvoqation I have only 12 gigs. You're totally right. I know SLI tech is basically dead, but would it help?
@@IdRadical No, as you would basically need to split the model, and with the cards being so old it would cause more slowdowns than it's worth. It's one of the reasons the AMD MI3xx series cards are getting so much attention over Nvidia: they can hold a full Llama model in RAM. It's also a question of coding for and supporting a dead-end tech.
I have tested the IPAdapter for Flux; it's not giving me good results.
Hi Nerdy, thank you for that video, it was extremely helpful. I've got one question:
How could I use Flux to change someone's clothes into something more formal, or something more artsy?
Thank you :)
I would also like to know what a workflow for such a task should look like. I want to change the hairstyle or hair color of people in images.
Where was that recent video replacing insightface for ComfyUI for SDXL?
Hey I'm getting this error while using IPAdapter
"Mat1 and Mat2 can not be multiplied"
Can you help me with this issue?
I'm trying to test out some of my custom LoRAs but getting a "list out of range" error.
ty
@Nerdy Rodent. Thanks. If (a) I have only 12GB of VRAM, (b) I want two ControlNets and an IPAdapter in my setup, and (c) GGUFs don't work with ControlNets, then there's little chance of using Flux and I'd need to go back to SD1.5/XL, yes?
I guess it mostly depends on how long you're willing to wait!
I have a laptop with 8GB of VRAM. The best option there is the Hyper SDXL models (an improved version of the faster Turbo/Lightning variants of SDXL, which are themselves faster than normal SDXL); they support IPAdapters or multiple ControlNets at the same time with no issues.
@@yurinka Thanks. I wanted Flux because I thought it understood prompts better. From what I see with my unusual prompts, it isn't really much better than SDXL.
@@contrarian8870 It also depends on the model you're using and how it's baked. Flux is quite resource-intensive; best to wait until they get that sorted. And YES, SDXL is still very good!!! Use Comfy!!
ControlNets work with GGUF; I tested a depth map, but couldn't get IPAdapter to work. They have info about that on their site. Flux has far better prompt following, though the lighter versions are worse. Try CFG 3.5 and 50 steps with the GGUF Dev model; in my experience it needs lots of steps, and generation is relatively slow, which makes it less pleasant to use. SDXL and even SD1.5 are still great; it's really nice to be able to generate stuff in a couple of seconds. Overall, I think that besides the efficiency improvements likely to come, we need better hardware.
Can you create a manga coloring workflow using ComfyUI? I would appreciate it if you made a video about it.
I think you'd be the best person to create it.
Great video, but the non-commercial license of the Dev model makes it so much more restrictive!
I'll keep harping on about this.
But Stability has no one but themselves to blame for losing the open-source crown.
@@vi6ddarkking They don't want open source. Just like OpenAI being open until things got serious with 3.0. Emad has said in interviews that closed will always out-compete open source because they have narrowly focused teams, so...
Ironic that the most-used model base is Pony, and it got that way by shredding and ripping SD down to the bare-bones framework and basically starting over.
Is the good old 1080Ti (11gb) any good for this anymore?
@@cipher893 That sure is a lot of VRAM, possibly? The common recommendation now if you're looking to start is a 12GB RTX 3060; specifically, you're looking at the CUDA cores and the VRAM. I'm not sure about the CUDA cores on a 1080 Ti, but I'm able to run Stable Diffusion 1.5 fairly well on my GTX 1050 Ti mobile.
@@MrMisturr Thanks for the recommendation and a little in-depth information. I'll start looking around.
I have tried flux with comfyui a little but the rendering times seem to be too long for it to be fun to experiment with.
Can the XLabs nodes run successfully on a Mac?
Give it a go and see!
How do I make good inpainting without seams?
Could you or anyone maybe help me out? I have an Intel 14900K CPU and an AMD Radeon RX 7900 XTX GPU.
I've downloaded ComfyUI and it runs on my Intel CPU, not the graphics card.
Then I watched a video on using the Flux AI model, and when I went to use it, it said I needed an Nvidia GPU.
So does anyone have an easy, reliable workaround for this? If so, please let me know and I'll send you my email, or if you have a good video/link I'd be greatly appreciative.
Also, when they say an Nvidia GPU, can it be any 4080 or 4090, like a PNY GeForce 4090, or does it have to be the Nvidia brand? Thanks
Any brand of Nvidia GPU will work; board partners like PNY all use the same Nvidia chips.
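That said, an earlier comment in this thread reports Flux ControlNets working on an RX 7900 XTX via ROCm on Linux, so an Nvidia card isn't strictly required. A rough setup sketch under that assumption (the ROCm 6.1 wheel index is the one mentioned above working in WSL; exact package versions may differ on your system):

```shell
# Install the ROCm build of PyTorch instead of the default CUDA build
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.1

# Get ComfyUI and its remaining dependencies
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt

# Launch - with a ROCm build of PyTorch installed, ComfyUI can use the
# AMD GPU instead of falling back to the CPU
python main.py
```

This is a setup sketch, not a guaranteed recipe; Windows users generally need WSL for the ROCm route.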
ComfyUI is the new definition of a "programmer move": loading 5 nodes for the same thing, in the same config, every single time and forever, instead of just providing an easy-to-use ControlNet preprocessor (or not) + ControlNet model in a single node like the A1111 ControlNet extension.
The XLabs sampler doesn't work on Macs.
You can use the XLabs ControlNet models with KSampler (or any sampler you use with Flux). That works the same as in SDXL workflows: move the model files into the models/controlnet folder and use the usual Load ControlNet and Apply ControlNet nodes.
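A minimal sketch of that file placement, assuming a ComfyUI checkout in the current directory; the checkpoint filename is a placeholder, and the `touch` stands in for the file you actually downloaded:

```shell
# Placeholder filename - substitute whichever X-Labs Flux ControlNet
# checkpoint you downloaded.
CKPT=flux-canny-controlnet-v3.safetensors
touch "$CKPT"                         # stand-in for the downloaded file

# ComfyUI's Load ControlNet node scans models/controlnet
mkdir -p ComfyUI/models/controlnet
mv "$CKPT" ComfyUI/models/controlnet/

ls ComfyUI/models/controlnet          # the checkpoint should be listed here
```

After restarting ComfyUI (or refreshing the browser tab), the file appears in the Load ControlNet dropdown.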
@@lennoyl Yes, I have tried that, with mixed results. Are the results with the XLabs sampler good?
We need LoRAs, ControlNet, IPAdapter, etc. for the FP8 version of Flux.1 Schnell. Sorry guys, to do real things on your own machines, with affordable GPUs and in human timescales, you need this; the rest is academic. But we love you anyway. 😘😘😘
@@AntonioSorrentini Agree
@@AntonioSorrentini Flux Dev LoRAs seem to work with Flux Schnell FP8. I've been using one I trained on myself and it works fairly well. The style ones from X-Labs are more subtle but definitely make a difference too.
@@robd7724 I tried the style ones from X-Labs, and while there is definitely a difference when using them, I still can't get anything useful. But thank you for the info; I'll try to train one myself.
@@robd7724 Well, I get an out-of-memory error when I try the ControlNets. Union or XLabs, both the same.
Can't wait for the Forge version. Forge is way better.