Want STUNNING Results? Flux Redux Makes It Happen!

  • Published: 26 Nov 2024

Comments • 39

  • @MonzonMedia
    @MonzonMedia  1 day ago +1

    **Update** Note the StyleModelApplyAdvanced node at 10:06 has now been changed to ReduxAdvanced. Small changes to the node, but it works the same way. Thanks to @marcihuppi for pointing out that the previous one was no longer there.

  • @crippsverse
    @crippsverse 3 days ago +6

    I like your instructions. I can actually follow them!

    • @MonzonMedia
      @MonzonMedia  3 days ago

      I appreciate that, means a lot to me! 🙏🙌

  • @MonzonMedia
    @MonzonMedia  3 days ago +5

    **Note** Hello peeps! Just a heads up, this isn't a beginner tutorial; you need to know some ComfyUI basics and have some understanding of node setup. Also, this video and the latest one on Flux in/outpainting were done on an NVIDIA 3060Ti with 8GB VRAM and 32GB of system RAM. Flux runs fine on my system with ComfyUI and Forge; it's not the fastest, but it works at acceptable speeds!
    For those of you who have tried Flux Redux, what has been your experience?

    • @gamersgabangest3179
      @gamersgabangest3179 3 days ago +1

      Image generation with Flux Redux is slightly slower than the basic workflow + LoRA image generation: 1.52s/it vs 1.2s/it. (NVIDIA RTX 4070 Ti Super, 16GB)

    • @rogersnelson7483
      @rogersnelson7483 3 days ago +1

      @@gamersgabangest3179 Wow, your card runs 100x faster than my 8Gig card. A single basic Flux image with no LoRAs or extra nodes takes 1.5 minutes, sometimes even 7 minutes.
      That's why I'm hesitant to try these new Flux tools. I don't want to wait over 10 minutes for a single image.

    • @MonzonMedia
      @MonzonMedia  2 days ago

      @@gamersgabangest3179 That's expected since the pipeline is slightly different. Still very usable though.

    • @MonzonMedia
      @MonzonMedia  2 days ago

      @@rogersnelson7483 Curious as to why that is? Which card do you have? I'm using an NVIDIA MSI 3060Ti with 8GB VRAM and 32GB system RAM. I also load all my platforms from an internal M.2 drive.
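For readers weighing the figures quoted above (1.52 s/it with Redux vs 1.2 s/it for the basic workflow + LoRA), the overhead is fairly modest. A quick sketch of the arithmetic — the 25-step count is an assumption for illustration, not a value from the video:

```python
# Comparing the seconds-per-iteration figures quoted in the thread above.
redux_s_per_it = 1.52  # basic workflow + Redux
base_s_per_it = 1.20   # basic workflow + LoRA

slowdown = (redux_s_per_it - base_s_per_it) / base_s_per_it * 100
print(f"Redux is ~{slowdown:.0f}% slower per iteration")

# Estimated wall time for a 25-step generation (step count is an assumption):
steps = 25
print(f"base: ~{base_s_per_it * steps:.0f}s, redux: ~{redux_s_per_it * steps:.0f}s")
```

So on that card the extra node costs roughly a quarter more time per image, well short of the minutes-long renders reported on lower-VRAM cards.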

  • @b.s.design
    @b.s.design 3 days ago +1

    That's a great lesson! Thank you so much!

    • @MonzonMedia
      @MonzonMedia  3 days ago +1

      You're very welcome! Appreciate the support!

  • @op12studio
    @op12studio 2 days ago +1

    You can also hold down the ALT key and click-drag a node to duplicate it. IMO it's a little easier.

  • @vVinchi
    @vVinchi 3 days ago +1

    Thx for keeping us up to date with this stuff

    • @MonzonMedia
      @MonzonMedia  3 days ago

      Always great to hear from you man! BTW DM me on Discord when you get a chance! 👊🙌

  • @leonv_photographySG
    @leonv_photographySG 3 days ago +1

    Hi, for multiple images, would you suggest using the StyleModelApplySimple example?

    • @MonzonMedia
      @MonzonMedia  3 days ago

      Yes, but the advanced one gives you more control. Even the first workflow I show works just as well; you just have to duplicate the load image nodes and anything else associated with them.

  • @SouthbayCreations
    @SouthbayCreations 3 days ago +1

    Great video buddy thanks for sharing.

    • @MonzonMedia
      @MonzonMedia  3 days ago +1

      You bet! Great to see BFL still working on new stuff! 👍🏼

  • @baheth3elmy16
    @baheth3elmy16 3 days ago +1

    Thanks, nice video!

    • @MonzonMedia
      @MonzonMedia  3 days ago

      Thank you too! Appreciate it. 👍🏼

  • @marcihuppi
    @marcihuppi 1 day ago +1

    I installed the Reflux node pack, but it doesn't come with the Style Model Apply Advance (beta) node... any ideas?

    • @MonzonMedia
      @MonzonMedia  1 day ago

      Looks like they changed it to "ReduxAdvanced" and made some adjustments to the node. It still works the same way.

  • @Maylin-ze6qx
    @Maylin-ze6qx 1 day ago +1

    ❤❤❤❤

  • @amatrixa2923
    @amatrixa2923 2 days ago +1

    I think you skipped a step when you went from adding the conditioning nodes to your neater layout (4:35) and generating the cyberpunk car. Once you went to the car, what happened to the image prompt showing the image? It's gone; there are no instructions on what to do with that part before proceeding to generating the car (5:02). If I don't do something to the image node, it keeps using the image and doesn't generate the car.

    • @MonzonMedia
      @MonzonMedia  2 days ago

      Not sure what you mean; 4:35 to 6:22 explains all that. Set the ConditioningAverage node for the strength of the reference image, then set the ConditioningTimeStepRange (Style) and ConditioningTimeStepRange (Prompt) nodes based on the results you want, and that's it.

    • @amatrixa2923
      @amatrixa2923 2 days ago

      @@MonzonMedia At 4:35, once you finished showing how to add the two conditioning nodes and their values, the video switches to the workflow with straight lines, without explaining how to go from one to the other. We went from a workflow using an input image containing the bear to generating the car with a different, more condensed workflow with no image node, without saying what happened to the input image node and its connections. For example, 1:22 is one workflow and 5:00 is another; I'm asking how we got from one to the other.

    • @MonzonMedia
      @MonzonMedia  2 days ago

      @@amatrixa2923 You're missing the context; just ignore the workflow from 5:00. I was just making the point that if you generated the prompt at 4:57 on its own, the car would look something like that image.
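For anyone following the node discussion above: the ConditioningAverage plus two ConditioningTimeStepRange setup can be sketched conceptually in plain Python. This is only an illustration of the blending idea, not ComfyUI's actual API; the range values and list-of-floats representation are assumptions:

```python
# Conceptual sketch (NOT ComfyUI code): how a ConditioningAverage node plus two
# ConditioningTimeStepRange nodes split influence between the reference image
# ("style") and the text prompt over the denoising schedule.

def conditioning_average(cond_a, cond_b, strength):
    """Weighted average of two conditioning vectors; strength favors cond_a,
    like the ConditioningAverage strength controlling the reference image."""
    return [strength * a + (1 - strength) * b for a, b in zip(cond_a, cond_b)]

def active_conditioning(t, style_cond, prompt_cond,
                        style_range=(0.0, 0.5), prompt_range=(0.5, 1.0)):
    """Pick which conditioning drives step t (0.0 = start, 1.0 = end of sampling).
    The default ranges are illustrative, not values from the video."""
    conds = []
    if style_range[0] <= t < style_range[1]:
        conds.append(style_cond)
    if prompt_range[0] <= t <= prompt_range[1]:
        conds.append(prompt_cond)
    # Average when both ranges cover this step; fall back to the prompt otherwise.
    return [sum(v) / len(conds) for v in zip(*conds)] if conds else prompt_cond
```

The point of the two range nodes is exactly this split: early steps can lean on the reference-image conditioning while later steps lean on the text prompt, which is why the car generates instead of the input image being copied.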

  • @bobobaba2080
    @bobobaba2080 3 days ago +1

    When I use Flux Redux with a picture of a person and a location, like a living room, the result is the location shoved into the person like a texture instead of placing the person in the location. Haven't figured it out yet.

    • @MonzonMedia
      @MonzonMedia  3 days ago

      Try the various methods shown where you can adjust the settings.

  • @TheSeniorzone
    @TheSeniorzone 3 days ago

    Thanks for your video 👍 Unfortunately I get a runtime error at #38. Actually, it should work with 32GB RAM and a 16GB GPU.

    • @TheSeniorzone
      @TheSeniorzone 3 days ago

      I have now tried every SigLIP version. Same error every time: size mismatch for vision_model.embeddings.patch_embedding.weight. I have no idea what's wrong with my ComfyUI. Well, there is also life without Redux :)

    • @MonzonMedia
      @MonzonMedia  3 days ago +1

      Update comfyui either through the manager or manually.

  • @borjanpeovski7615
    @borjanpeovski7615 3 days ago +1

    Trying the multiple-image workflow but keep getting bad results. It seems to just overlay the images and blend them together instead of mixing elements from both.

    • @MonzonMedia
      @MonzonMedia  3 days ago

      I'd try the last method with the StyleModelApplySimple node or the advanced one; much better results. Or you can use the one I did and add the ConditioningTimeStepRange nodes.

  • @contrarian8870
    @contrarian8870 2 days ago +1

    Wait, so the "AdvancedReflux" node you start using at 7:20 supersedes what you'd done before that timestamp?

    • @MonzonMedia
      @MonzonMedia  2 days ago +2

      Not necessarily, they're just different ways to do it. The earlier method is an existing workflow I've used for similar setups beyond Flux (like IP-Adapter for SDXL), so you can use it for other models as well. The AdvancedReflux node is specifically for Flux. Great question. 👍

  • @thewebstylist
    @thewebstylist 3 days ago

    Ahhhhhgh ya lost me bro on all local installs 😢

    • @MonzonMedia
      @MonzonMedia  3 days ago

      Not sure what you are talking about?

    • @crippsverse
      @crippsverse 3 days ago

      ComfyUI requires a lot of concentration to get up and running