#### Links from my Video ####
Get my SHIRT: www.qwertee.com/ with Code "Olivio"
Get my WORKFLOW: www.patreon.com/posts/redux-advanced-116592360
github.com/kaibioinfo/ComfyUI_AdvancedRefluxControl
👋 hi
Hi Olivio, thanks for the node suggestion. I was trying to figure out how to use the prompt with Redux; this helps 😊
Now this is it, Olivio came back with a detailed video on new stuff.
The Redux model is certainly welcome, and Black Forest is amazing; that said, we still have a way to go before we are back at IP Adapter levels of control.
As one might expect based on the nature of the model and the workflows, Redux on its own doesn't really differentiate between or understand content vs style. The generation will always be influenced by both, and until someone starts to figure out which blocks are which (if this even works for Flux), it's going to be a bit of a blunt force tool.
Yeah, I think Flux just takes a different approach. I was actually able to get some cool results by unsampling multiple images with Fluxtapose and then masking the latents together to create melded compositions.
I think it's an unfortunate product of the marketing, but so many people come to these tools thinking they don't take creative imagination and expecting everything to be spoon-fed to them on a silver platter.
Thanks for those AdvancedRefluxControl nodes.
I tried to do something with a node named "Style Model Apply Advanced" (not one of the AdvancedRefluxControl nodes), but even though some of the results were very nice, it was very difficult to use (only a very low strength of around 0.1 let the prompt carry any weight).
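For readers wondering why such a low strength is needed: Redux-style conditioning appends image-derived tokens to the prompt tokens, so the image guidance can easily drown out the text, and a strength multiplier on those tokens is one common way to rebalance the two. The torch sketch below only illustrates that idea under assumed token shapes; it is not the actual code of the "Style Model Apply Advanced" or AdvancedRefluxControl nodes.

```python
import torch

def apply_style_tokens(text_cond: torch.Tensor,
                       style_tokens: torch.Tensor,
                       strength: float = 0.1) -> torch.Tensor:
    """Append image-derived style tokens to the text conditioning.

    text_cond:    [batch, n_text_tokens, dim]   e.g. T5 prompt embeddings
    style_tokens: [batch, n_style_tokens, dim]  e.g. projected image embeddings
    strength:     scales the style tokens; low values let the prompt dominate
    """
    scaled = style_tokens * strength               # weaken the image guidance
    return torch.cat([text_cond, scaled], dim=1)   # the model attends to both

# Toy shapes only - real embeddings come from the text encoder / style model.
text = torch.randn(1, 256, 4096)
style = torch.randn(1, 729, 4096)
cond = apply_style_tokens(text, style, strength=0.1)
print(cond.shape)  # torch.Size([1, 985, 4096])
```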
It seems quite similar to just interrogating the images, mixing the resulting prompts, and maybe even mixing in some depth control if you really want a similar composition. But this is still a cool approach.
Feels like a React discussion from 8 years ago.
Thank you so much for all your informative videos.
Could you please show a way to change the style of an image without changing its composition, face, or overall look?
The target styles would be vector illustration, anime, and watercolor painting.
I've tried a lot of workflows (Pix2Pix, Fluxtopaz, and many others), and each one has minor defects like color changes, face changes, or changes to the look.
Could you please offer any suggestions?
Hi! What do you use for the CPU and GPU usage display on the right?
This worked fine yesterday. Today, after adding these new Advanced nodes, all I get is gradients. I had already taken the original simple nodes, doubled everything, and had it all working. Now, whether I use my setup or your setup, all I get is gradients. I even updated ComfyUI, and everything is still gradients. I loaded PNGs I made yesterday with my dual setup, and instead of the proper images I get only gradients.
Basically, Redux can be considered the Flux counterpart to Omnigen, even if a bit less flexible, right?
Not really, because you need a ton of different models to do all that, while Omnigen's main purpose is to do it all in one model.
Doesn’t ‘professional RAW’ photo create a flat image that needs colour grading like a normal RAW photo would?
You would think so, but no, it creates a graded image.
No, not for photos. That only applies to video (raw log formats). A RAW image by default looks just like the same image as a JPG.
How do I set up the prompt? It seems the conditioning in my example workflow is not connected.
Not an enjoyer of flux but nice job!
From the bottom of my heart, I hate ComfyUI.
+1
Skill issue
xD
lol I wouldn't mind a better front end. But I use it through Stability AI, which is a nice mixed interface.
@@shaiona UI issue
It's time for you to understand, people: it's not for low RAM anymore.
@@сверхчеловечек Rent an online Docker server. It costs like a couple of cents.
@@OlivioSarikas What about privacy? Your images/videos won't really be only yours then, will they?
@@сверхчеловечек You rent an online server that is deleted after you close it. Everything you do on there is your private matter.
@@OlivioSarikas Respectfully, that isn't true. Never assume it's private if you are not running locally.
@@TyreII I didn't say it's private, I said it's your private matter, meaning they don't check your prompts for what you create. There is no censorship, because you literally rent a server and run your own ComfyUI. However, server rules may apply in the ToS, even if they don't check what you generate.
Hi. Is there an online version on Hugging Face? What is it called?
Does this work with GGUF yet?
Spaghetti man
GJ!
Thanks Olivio.
Sell your armpit hair and buy a 12GB or 16GB GPU. If you think AI is VRAM greedy now, wait till next year.
Anyone know how to fix the output? When I use a 9:16 image, it crops it to 1:1.
Insane. 😱
What is the absolutely fastest flux model right now?
shuttle-3-diffusion - 4 steps
I see it's a text-to-image AI model. What's the fastest image-to-image model?
@@BrianZajac Any model is image to image. You just encode the input image into a latent image and feed that into the sampler to generate from instead of an empty latent.
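To make that concrete, here is a minimal diffusers sketch of the same trick; the checkpoint name, prompt, and strength value are just placeholders, and the point applies to any text-to-image model, not this particular library.

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Any text-to-image checkpoint can do image-to-image: the input picture is
# encoded into a latent and the sampler denoises from that instead of from
# an empty (pure noise) latent.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.png")   # the picture to start from
result = pipe(
    prompt="a watercolor painting of the same scene",
    image=init_image,
    strength=0.6,   # lower values stay closer to the input, higher values ignore it
).images[0]
result.save("output.png")
```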
I keep getting the "shapes cannot be multiplied" error for some reason and I don't know why. I have everything set up properly.
Did you download the image and drag it into Comfy for the workflow? Make sure your Comfy is updated, and make sure your Flux and CLIP loaders are set to the correct models and modes.
I had the same issue until I noticed I was loading the wrong CLIP Vision model by default when loading the second and third image. Hope it helps.
Anyone know how to use Flux with a low-spec PC? My specs: 8GB GPU, 24GB RAM.
Yes, you can just get an 8-step Flux model on CivitAI; you can easily run that.
Try looking up Flux Dev NF4, which is a quantized model that claims to run on as little as 6GB. I can't vouch for it though, since I haven't tried it myself.
@@AB-wf8ek Thanks, I will look into it.
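For reference, NF4 here means 4-bit "normal float" quantization of the model weights. Below is a hedged diffusers/bitsandbytes sketch of loading Flux Dev that way; it is not the specific pre-quantized checkpoint mentioned above, and actual VRAM usage will vary with resolution and offloading settings.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# Store the large transformer weights in 4-bit NF4 and dequantize on the fly,
# trading a little speed for a large drop in VRAM usage.
nf4 = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=nf4,
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep only the active parts on the GPU
image = pipe("a cat in a space suit", num_inference_steps=20).images[0]
image.save("cat.png")
```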
No
Did you ever think about people who only have 6 or 8 GB of VRAM? Redux is so slow on low VRAM, and you act like low-VRAM cards can run it in ComfyUI easily. Please add a note to tell people that low VRAM can't really play with it!!!
@@kittendumb4128 Works fine on my 3060 Ti with 8GB VRAM and 32GB system RAM. It generates the same as a regular Flux image.
Actually, as a side effect, the plugin will also lower VRAM consumption: the lower the image strength, the lower the VRAM consumption.
Did you ever think that if you want to do cool stuff on the cutting edge, you need to invest in your equipment? Not everything in life is free... It's not anyone else's fault that you have a potato for a computer.
Eh... At 3:19 I highly doubt you achieved this with the help of Redux alone. When I input the prompt I can get pretty much the same effect using just the native Flux model, because your prompt has already done 99.999% of the work! So is the Redux model even needed at all? You should post a before & after.
Otherwise nice video Oli!
@@CharlesLijt you seem to miss the point here. Redux is image guidance. It's not about achieving more, it's about getting similar results to the image input
@@OlivioSarikas yeah, it seems that redux helps guide the fine details / transform styles.
You are amazing at finding things that are trash and making them sound exciting.
Calling Flux Redux ‘trash’ is such an overreach. This tool is state-of-the-art for generating high-quality image variations and restyling with flux. Sure, no tool is perfect, but dismissing something without pointing out specific flaws feels more like a hot take than a genuine critique.
@@Darkwing8707 It's trash. You can champion trash if you want to.
If you have such incredible insight, why don't you use it to get more than a few hundred views on your YouTube channel? It always seems to be people with incredibly amateur content giving successful YouTubers advice as if they needed it.
Most people are still using Forge. I know this Comfy stuff makes some of you guys feel l33t, but it's not practical for most users. When I come home from work I don't want to fiddle with wires and nodes. I just want to make cool images, sip a beer, and listen to the tunes. The first person to make this workflow stuff practical will win the interface wars.
Why would you assume that people using a piece of software feel superior?
Maybe people using ComfyUI just prefer it for practical reasons, just like it's practical for you to use Forge.
@@AB-wf8ek No need to get caught up in the weeds. I think what he's saying is that ultimately Comfy will be limited to a small group of people and won't be adopted by the mainstream, because it's impractical for most users.
Yeah, no one wants to struggle to make pictures of cats after a long day. We want our cats now.
@dogondeity Node editing is in everything from Blender to Houdini. Even in video editing, there's Nuke and Resolve. Most people don't use that software, but it doesn't mean node editing isn't practical.
What you're talking about is popularity. Most people use Adobe.
People don't use ComfyUI to look "cool"; they use it because you can build everything yourself, like with a Lego set. And once you have built something, you can use it whenever you want; you don't have to rebuild it every time you start. You can even use premade templates like Olivio's, so you don't even have to do it yourself.
I guess you have a mental block here. Because once you get used to it, you realize you can make images much more easily and much faster, get the results you want more easily, and end up with way more time for having a drink and listening to music.
ComfyUI is the reason I stopped watching your videos. It's such an unproductive UI when I try to do a simple thing with the same settings as in SD. The SD UI performs better by a lot.
Why do the two anti-Comfy comments sound like a bot that developed dyslexia?
I feel you, I’d also be mentally challenged with nodes if forming basic English sentences were already that hard of a task for me
@@149315Nico ComfyUI is simply an unproductive user interface with zero benefits for any type of work. An unfinished, unfriendly UI that does the exact same thing as SD but wastes more time.
Oh, so you stopped watching his videos... That's why you're here right now commenting, is it? Nobody cares what you watch or don't watch on YouTube.
Love your content Olivio, but I have to say, for all the hype I just don't like the Flux outputs, ever. They always look too plastic and polished. The Pro version is much better, but it's not open source, so useless imo.
You can get some pretty amazing results using different models and LORAs. I've even been able to get interesting effects like lens smears, light leaks, motion blurs, and film grain.
I've also found Fluxtapose can unlock a lot of creative abilities, and these new controlnet/redux models only expand the possibilities.
@@AB-wf8ek Those all sound promising, but the base Flux model (it's less noticeable in the Pro version) always has 'that face', especially for female faces: that high-cheekbone, cleft-chin, tiny-pixie-nose look. I think the base model is overtrained on a particular dataset with that look, and it's almost impossible to remove. The Pro version is better, but like I say, not open source.
@Lost_In_Entropy I don't know, I see plenty of examples of people generating all kinds of stuff with LoRAs. I've also been able to get very unique results with my own workflows. Maybe try broadening your horizons with the kind of images you're trying to generate.
Either way, you're welcome to your opinion, just wanted to state mine.
An intro sentence for noobs would not hurt, so we know whether we're interested before we spend the time.
Our brains are wired to process and evaluate new information constantly. When we encounter something unfamiliar, the very act of parsing and understanding it engages our neural pathways and strengthens our cognitive abilities. By demanding that everything be pre-digested with an introduction, we're actually robbing ourselves of valuable learning opportunities.
Think about how children learn - they dive into new experiences without needing a roadmap. They explore, question, and piece things together naturally. This capacity doesn't disappear in adulthood; we just sometimes forget to embrace it. Our pattern-recognition abilities and contextual understanding allow us to quickly determine relevance and value, even without explicit signposting.
Rather than needing everything spoon-fed to us, we should trust in our natural ability to learn, adapt, and make connections. This is how we've evolved as a species, and it's how we continue to grow as individuals.
It's in the title. The video is about using Flux Redux.
He even explains in the first sentence that the video is about a new Flux Redux Advanced node.
The amazing thing about watching videos online these days is that you can easily skip ahead to quickly see what a video is about.
@@AB-wf8ek Be as shocked as you want, but some of us have no clue what Flux Redux is.
You were kidding?
Paid workflow, sorry!!!
The workflow is free on the GitHub page, as I said in the video! I made some modifications to it, and that version is a reward for my supporters. Maybe think about supporting the people who make these tutorials for you. How about that?
Weird, I cannot download FLUX.1-Redux-dev.safetensors from the Black Forest link you shared (it says 'file not available'). Can't find it anywhere else either. Any suggestions, Olivio?
If it’s on Hugging Face you might need to accept access on the model card page first. It’s also on CivitAI.
same...
I can't download it either.
It’s on CivitAI.
You have to agree to the provisions before you press download!
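For anyone who prefers scripting the download: after accepting the license on the model card (and creating an access token in your Hugging Face account settings), a huggingface_hub call along these lines should work; the filename inside the repo is an assumption, so check the Files tab if it differs.

```python
from huggingface_hub import hf_hub_download

# Gated repo: this only works after you have accepted the FLUX.1-Redux-dev
# license on the model card and pass a valid access token.
path = hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-Redux-dev",
    filename="flux1-redux-dev.safetensors",  # assumed filename - verify on the repo
    token="hf_your_token_here",              # placeholder token
)
print("Downloaded to", path)
```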
Ugh. ComfyUI. I'm not saying it's too difficult to use, I'm just saying it's 99.8% unnecessary for making great stuff.
I've seen a lot of comparisons, and I've seen a lot of things that many people have said they can't figure out, but I've stumbled across these little hiccups and have been able to fix them every time with just simple adjustments in Forge. Some things just don't need to be overbuilt.
ComfyUI's node-based approach isn't just complexity for complexity's sake - it offers crucial advantages for serious AI image generation work. While Automatic1111/Forge are excellent for straightforward workflows, ComfyUI's granular control allows for techniques that simply aren't possible in simpler interfaces.
The visual workflow also makes it much easier to experiment, troubleshoot, and share exact generation settings with others. Rather than having to document a complex series of steps, you can simply share a workflow file that captures your entire process. This is invaluable for collaboration and learning from others in the community.
Yes, there's a steeper learning curve, but that complexity serves a purpose. It's similar to how Photoshop offers vastly more capabilities than simpler photo editors, even though many users might only need basic features. For those who want to push the boundaries of what's possible with AI image generation, ComfyUI's power and flexibility are absolutely worth the investment in learning it.
The beauty of having multiple tools is that users can choose what works best for their needs. While simpler interfaces are perfect for many users, dismissing ComfyUI's advantages overlooks its genuine value for more advanced use cases.
Automating tasks once > doing stuff manually every time.
@149315Nico that's the true forge way
If you like Forge, why don't you look for some Forge videos on YouTube? Some people like Comfy, and it seems like you're saying Comfy videos shouldn't exist because you personally don't like Comfy. If I liked basketball, I wouldn't look up hockey videos on YouTube and complain that they weren't talking about basketball...
@generichuman_ Probably because the video title says nothing about it being about Comfy...
I make images better than 99% of what I see all over CivitAI every day, with Forge. Fast. Comfy is for sure overhyped and too busy for no reason. I've yet to see one legitimate comparison that comes close to anything I've ever made.
But hey, go be comfy with Comfy. You do you. I wouldn't be here, though, if the title of the video were properly labeled. Stopped at the first mention of it to comment and leave. Aka, dropped payload and rolled out.