**Update** Note the StyleModelApplyAdvanced node at 10:06 has now been changed to ReduxAdvanced. Small changes to the node, but it works the same way. Thanks to @marchihuppi for pointing out that the previous one was no longer there.
I like your instructions. I can actually follow them!
I appreciate that, means a lot to me! 🙏🙌
**Note** Hello peeps! Just a heads up, this isn't a beginner tutorial; you need to know some ComfyUI basics and have some understanding of node setup. Also, this video and the latest one on Flux in/outpainting were done on an NVIDIA 3060 Ti with 8GB of VRAM and 32GB of system RAM. Flux runs fine on my system with ComfyUI and Forge; it's not the fastest, but it works at acceptable speeds!
For those of you who have tried Flux Redux, what has been your experience?
Image generation with Flux Redux is slightly slower than the basic workflow + LoRA image generation: 1.52 s/it vs 1.2 s/it. (NVIDIA RTX 4070 Ti Super, 16GB)
@@gamersgabangest3179 Wow, your card runs 100x faster than my 8GB card. A single basic Flux image with no LoRAs or extra nodes takes 1.5 minutes, sometimes even 7 minutes.
That's why I'm hesitant to try these new Flux tools. I don't want to wait over 10 minutes for a single image.
@@gamersgabangest3179 That's expected since the pipeline is slightly different. Still very usable though.
@@rogersnelson7483 Curious as to why that is? Which card do you have? I'm using an NVIDIA MSI 3060 Ti with 8GB VRAM and 32GB of system RAM. I also load all my platforms on an internal M.2 drive.
That's a great lesson! Thank you so much!
You're very welcome! Appreciate the support!
You can also hold down the ALT key and click and drag a node to duplicate it. IMO it's a little easier.
Indeed!
Thx for keeping us up to date with this stuff
Always great to hear from you man! BTW DM me on Discord when you get a chance! 👊🙌
Hi, for multiple images, would you suggest using the StyleModelApplySimple example?
Yes, but the advanced one gives you more control. Even the first workflow I show works just as well; you just have to duplicate the Load Image nodes and anything else associated with them.
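In case it helps to see the "duplicate the image branch" idea spelled out, here's a rough, hypothetical Python sketch of the concept: one style application chained per reference image, so elements from each image are added to the conditioning in turn. None of these function names are ComfyUI's real API; they're illustrative stand-ins only.

```python
# Hypothetical sketch of chaining one Redux/style application per reference
# image instead of averaging the images into one overlay.
# Every name below is an illustrative stand-in, not ComfyUI's actual API.

def clip_vision_encode(image):
    """Stand-in for encoding a reference image with the CLIP Vision model."""
    return {"image_features": image}

def style_model_apply(conditioning, style_model, vision_output, strength=1.0):
    """Stand-in for appending the image's Redux tokens to the conditioning."""
    return conditioning + [(style_model, vision_output, strength)]

text_conditioning = ["prompt tokens"]     # stand-in for the text encoder output
redux_model = "flux1-redux"               # stand-in for the loaded style model
references = ["image_a", "image_b"]       # one Load Image node per reference

cond = text_conditioning
for ref in references:                    # duplicated image branch per reference
    cond = style_model_apply(cond, redux_model, clip_vision_encode(ref), strength=0.6)

print(cond)  # text conditioning followed by one style entry per reference image
```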
Great video buddy thanks for sharing.
You bet! Great to see BFL still working on new stuff! 👍🏼
Thanks, nice video!
Thank you too! Appreciate it. 👍🏼
I installed the Reflux node pack, but it doesn't come with the StyleModelApplyAdvanced (beta) node... any ideas?
Looks like they changed it to "ReduxAdvanced" and made some adjustments to the node. Still works the same way.
❤❤❤❤
I think you skipped a procedure when you went from adding the conditioning nodes to your neater layout (4:35) and generating the cyberpunk car. Once you went to the car, what happened to the image prompt showing the image? It's gone, and there are no instructions on what to do with that part before proceeding to generating the car (5:02). If I don't do something to the image node, it keeps using the image and doesn't generate the car.
Not sure what you mean; 4:35 to 6:22 explains all that. Set the ConditioningAverage for the strength of the reference image, then set the ConditioningTimeStepRange (Style) and ConditioningTimeStepRange (Prompt) based on the results you want, and that's it (there's a rough sketch of this setup after this thread).
@@MonzonMedia At 4:35, once you finished showing how to add the two conditioning nodes and their values, the video switches to the workflow with straight lines, without explaining how to go from one to the other. We went from a workflow using an input image containing the bear to a different, more condensed workflow with no image node generating the car, without saying what happened to the input image node and its connections. For example, 1:22 is one workflow and 5:00 is another; I'm asking how we got from one to the other.
@@amatrixa2923 You are missing the context; just ignore the workflow from 5:00. I was just making the point that if you generate the prompt at 4:57 on its own, the car would look something like that image.
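For anyone trying to follow the thread above outside the video, here's a rough conceptual sketch (plain Python, not ComfyUI code) of what the ConditioningAverage and ConditioningTimeStepRange nodes are doing: blend the reference-image conditioning with the text conditioning, then restrict each branch to part of the sampling schedule. The function names, values, and data shapes are purely illustrative assumptions.

```python
# Conceptual sketch only; not ComfyUI's internal API. The names below are
# hypothetical illustrations of what the nodes in the thread roughly do.

def conditioning_average(cond_to, cond_from, cond_to_strength):
    """Blend two conditionings, like the ConditioningAverage node:
    higher strength = more weight on the first input (the reference image)."""
    return [cond_to_strength * t + (1.0 - cond_to_strength) * f
            for t, f in zip(cond_to, cond_from)]

def set_timestep_range(cond, start, end):
    """Limit a conditioning to part of the sampling schedule, like the
    ConditioningTimeStepRange nodes (0.0 = first step, 1.0 = last step)."""
    return {"cond": cond, "start": start, "end": end}

# Illustrative values: let the blended image/style conditioning steer the
# early steps and the text prompt steer the remaining steps.
image_cond = [0.8, 0.2, 0.5]    # stand-in for the Redux/style conditioning
prompt_cond = [0.1, 0.9, 0.4]   # stand-in for the text conditioning
blended = conditioning_average(image_cond, prompt_cond, cond_to_strength=0.5)
style_part = set_timestep_range(blended, start=0.0, end=0.3)
prompt_part = set_timestep_range(prompt_cond, start=0.3, end=1.0)
print(style_part, prompt_part)
```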
When I use Flux Redux with a picture of a person and a location, like a living room, the result is the location shoved into the person like a texture instead of placing the person into the location. Haven't figured it out yet.
Try the various methods shown where you can adjust the settings.
Thanks for your video 👍 Unfortunately, I get a runtime error at #38. It should actually work with 32GB RAM and a 16GB GPU.
I have now tried every SigLIP version. Same error for everything: size mismatch for vision_model.embeddings.patch_embedding.weight. I have no idea what's wrong with my ComfyUI. Well, there is also life without Redux :)
Update ComfyUI either through the Manager or manually.
Trying the multiple-image workflow but I keep getting bad results. It seems to just overlay the images and blend them together instead of mixing elements from both of them.
I'd try using the last method with the StyleModelApplySimple node or the advanced one. Much better results. Or you can use the one I did and add the ConditioningTimeStepRange nodes.
Wait, so the "AdvancedReflux" node you start using at 7:20 supersedes what you'd done before that timestamp?
Not necessarily, just different ways to do it. The earlier method is an existing workflow I've used for similar purposes beyond Flux (like IP-Adapter for SDXL), so you can use it with other models as well. The AdvancedReflux node is specifically for Flux. Great question. 👍
Ahhhhhgh ya lost me bro on all local installs 😢
Not sure what you are talking about?
ComfyUI requires a lot of concentration to get up and running