Flux Tools For Low VRAM GPUs | Introduction to Inpainting & Outpainting

  • Published: 27 Nov 2024

Comments • 84

  • @MonzonMedia
    @MonzonMedia  6 days ago +9

    By the way, there is Flux Tools support for SwarmUI and SDNext. Fingers crossed that Forge adds this when the updates get done soon!

    • @havemoney
      @havemoney 6 days ago

      This is what JackDainzh answered; I don't know who he is.
      It's not a controlnet model, it's an entirely separate model that is designed to be used as a model to generate, not to guide. The issue is that, as of now, with the current img2img implementation of Flux, there is no way to guide the model, say, with pix2pix instruct's Image CFG Scale slider, because it doesn't affect anything at the moment (I forced it visible in the UI when using Flux), and because Flux's conditioning is different from that of regular SD models, it gets skipped.
      Implementing the guidance from the img2img tab means rewriting the whole backend engine, and I have no idea how long that will take, maybe months, or maybe 1 day.

  • @onurerdagli
    @onurerdagli 5 days ago +1

    Thank you, I just tried outpainting and inpainting, truly amazing quality.

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Indeed! I haven't seen any major issues so far, but I'm still testing. Impressive, especially with outpainting. 👍

  • @SouthbayJay_com
    @SouthbayJay_com 5 days ago +1

    These are so cool! Can't wait to dive into these! Thanks for sharing the info! 🙌🙌

    • @MonzonMedia
      @MonzonMedia  5 days ago

      @@SouthbayJay_com appreciate it bro! Have fun! 🙌🏼

  • @Elwaves2925
    @Elwaves2925 5 days ago +1

    I finally got to do the inpainting I needed from Flux. On 12GB VRAM with the full 'fill' model it was a lot quicker than I expected. That's the only one I've tried so far, but with how well it worked I'm looking forward to the others.

    • @MonzonMedia
      @MonzonMedia  5 days ago +1

      @@Elwaves2925 good to know it can run on 12GB VRAM. Have you tried the FP8 version, and if so, is it any faster?

    • @Elwaves2925
      @Elwaves2925 5 days ago +1

      @@MonzonMedia Haven't had a chance to try the FP8 version as I didn't know it existed until your video. I will be trying it later.

  • @LucaSerafiniLukeZerfini
    @LucaSerafiniLukeZerfini 5 days ago +1

    Can't wait. I've found Flux less effective for design than SDXL.

  • @AimanFatani
    @AimanFatani 5 days ago +1

    Been waiting for such things... thanks for sharing ❤❤

  • @contrarian8870
    @contrarian8870 6 days ago +3

    Thanks. Do the Redux next, then Depth, then Canny

    • @MonzonMedia
      @MonzonMedia  6 days ago +2

      Welcome! Redux is pretty cool! Will likely do it next, then combine the 2 controlnets in another video.

  • @skrotov
    @skrotov 4 days ago +1

    Great, thanks. By the way, you can hide the noodles just by pressing the eye icon at the bottom right of your screen.

    • @MonzonMedia
      @MonzonMedia  4 days ago +1

      Indeed! I do like to use the straight ones, though I switch to spline when I need to remember where everything is connected. 😊 👍

  • @TheColonelJJ
    @TheColonelJJ 4 days ago +1

    As always, your videos are a welcome watch. A favor to ask: since things come to Comfy so fast, could you add a sound bite when things aren't quite ready for Forge?

    • @MonzonMedia
      @MonzonMedia  4 days ago

      Yeah, I normally do but forgot this time, although I did post a pinned comment that there is support for other platforms like SDNext and SwarmUI.

  • @havemoney
    @havemoney 6 days ago +8

    We are waiting for lllyasviel to add it to Forge.

    • @MonzonMedia
      @MonzonMedia  6 days ago +2

      🙏😊 I do see some action on the GitHub page and no other "delays" posted. Fingers crossed my friend!

    • @mik3lang3lo
      @mik3lang3lo 5 days ago +1

      ❤ we are all waiting ❤

  • @FranktheSquirell
    @FranktheSquirell 5 days ago +1

    Ya did it again, great job as usual 😊😊
    The only trouble is I've been using Fooocus for in/outpainting and now you've made me want to try it in ComfyUI, grrrrr lol.
    I gave up on Comfy because an update broke the LLM generator I was using in it; come to mention it, I can't even remember the name of the generator now... damn 🤣🤣🤣🤣

    • @MonzonMedia
      @MonzonMedia  5 days ago +1

      😊 It does take some time to get used to; I have a love/hate relationship with ComfyUI hehehe. But it is worth knowing, especially since it gets all the latest features quickly. At the very least, just learn how to use drag-and-drop workflows and install any missing nodes. That's pretty much all most people need to know.

  • @cekuhnen
    @cekuhnen 5 days ago +1

    Redux will be fun for MJ to deal with.

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Hey my friend! Nice to see you here! I haven't used MJ in a while but there is a lot you can do locally compared to MJ's features, plus way more models to choose from. Hope all is well with you. 👍

    • @cekuhnen
      @cekuhnen 5 days ago +1

      @@MonzonMedia My MJ sub will end this year and I won't go back. Vizcom has become so powerful and Rubbrband is also shaping up really well.

  •  5 days ago +1

    Thanks for the vid! And do you know if the flux.1-fill-dev (23GB) version is an extended version of the original flux.1-dev, or a whole new thing where you have to install both?

    • @MonzonMedia
      @MonzonMedia  4 days ago +1

      Welcome! Typically, in/outpaint models are just trained differently but are based on the original model.

    •  4 days ago +1

      @@MonzonMedia Got it! thanks!

  • @vVinchi
    @vVinchi 6 days ago +1

    This will be a good series of videos

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Indeed! Already working on the next one. Good to hear from ya bud!

  • @Maylin-ze6qx
    @Maylin-ze6qx 5 days ago +2

    ❤❤❤❤

  • @Scn64
    @Scn64 4 days ago +1

    When painting the mask, what effect do the different colors (black, white, negative) and opacity have on the outcome? Does the resulting inpaint change at all depending on which color/opacity you choose?

    • @MonzonMedia
      @MonzonMedia  4 days ago

      It's just for visual preference; it has no effect on the outcome.

  • @LucaSerafiniLukeZerfini
    @LucaSerafiniLukeZerfini 5 days ago

    Great to follow. Updated Comfy but it's returning this:
    RuntimeError: Error(s) in loading state_dict for Flux:
    size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
    By the way, depth and canny would be the best to see.

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Context? What were you doing? What are your system specs?

    • @LucaSerafiniLukeZerfini
      @LucaSerafiniLukeZerfini 5 days ago +1

      I managed to pull a ComfyUI update and now it works. Still having outlines visible on the outpainting. Thanks for the reply. I'm on Windows with an RTX 4090.

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Cool! Yeah, always update when new features come out. If you're seeing seams when outpainting, try increasing the feathering or do 1-2 sides at a time. Results can vary.
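
      For illustration, here is a minimal sketch of that padding step as a fragment of a ComfyUI API-format workflow (written as a Python dict). The node IDs and the upstream image connection are hypothetical; ImagePadForOutpaint is the stock node that adds the outpaint border, and its feathering input is the one to raise if a hard edge shows at the seam:

          import json

          # Hypothetical API-format fragment: pad only the left side,
          # with generous feathering to soften the transition at the seam.
          pad_node = {
              "12": {  # hypothetical node id
                  "class_type": "ImagePadForOutpaint",
                  "inputs": {
                      "image": ["10", 0],   # assumed upstream LoadImage node output
                      "left": 256,          # extend 1-2 sides at a time for fewer seams
                      "top": 0,
                      "right": 0,
                      "bottom": 0,
                      "feathering": 40,     # increase if an outline is visible at the seam
                  },
              }
          }
          print(json.dumps(pad_node, indent=2))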

    • @LucaSerafiniLukeZerfini
      @LucaSerafiniLukeZerfini 5 days ago +1

      Yes, maybe side by side works better. Another point: I'm trying to manage background switching for a car, but the results are still awful with Flux.

    • @MonzonMedia
      @MonzonMedia  5 days ago

      @@LucaSerafiniLukeZerfini the crop and stitch inpaint node might be better for that, but the Redux model can also do it. I'll be posting a video on Redux soon.

  • @FranktheSquirell
    @FranktheSquirell 4 days ago +1

    Me again lol 😊
    Have you tried the "DMD2" SDXL models yet? Not that many about, but wow are they impressive. Prompt adherence is about the same as Flux Schnell, but the image quality is really good. They say 4-8 steps, but a 12-step DMD2 image gives better results imo.
    Then again, I am getting old now and my eyes aren't as good as they used to be... that's my excuse 🤣🤣

    • @MonzonMedia
      @MonzonMedia  3 days ago +1

      Not yet but I remember reading about it on Reddit. Thanks for the reminder!

  • @skrotov
    @skrotov 4 days ago +1

    And what I don't like about this new fill model is that it seems to work on the actual pixels without enlarging the painted area like we did in Automatic1111. As a result, we get low detail and poor quality if the masked object wasn't that big.

    • @MonzonMedia
      @MonzonMedia  4 days ago +1

      That has more to do with the platform you are using; for example, Fooocus and Invoke AI have methods where the inpainted area is generated at its native resolution. I can't recall if there is a node on ComfyUI that does that, but I'm pretty sure there is. Might make a good video topic. 👍
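
      For reference, the crop-and-stitch idea itself is simple, whichever node or platform implements it: crop the masked region, inpaint it at a higher working resolution so the model has pixels to work with, then paste the result back. A minimal Python sketch using Pillow, with a hypothetical inpaint_fn standing in for the actual model call:

          from PIL import Image

          def crop_inpaint_stitch(image, box, inpaint_fn, work_size=1024):
              """Inpaint a small masked region at a higher working resolution.

              box: (left, top, right, bottom) around the masked object.
              inpaint_fn: assumed callable that runs the fill model on a crop.
              """
              crop = image.crop(box)
              # Enlarge the crop so the model generates near its native
              # resolution instead of on a handful of pixels.
              scale = work_size / max(crop.size)
              big = crop.resize((round(crop.width * scale), round(crop.height * scale)), Image.LANCZOS)
              result = inpaint_fn(big)
              # Shrink back and paste the result over the original region.
              result = result.resize(crop.size, Image.LANCZOS)
              out = image.copy()
              out.paste(result, box[:2])
              return out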

  • @havemoney
    @havemoney 6 days ago +2

    I'll go play Project Zomboid, I recommend it

    • @MonzonMedia
      @MonzonMedia  6 days ago +1

      Ooohhh, will check it out! I finally played Final Fantasy VII Remake! 😬😊 Loved it!

  • @bause6182
    @bause6182 6 days ago +3

    Thanks for the guide, is it possible to run Redux with low VRAM?

    • @MonzonMedia
      @MonzonMedia  6 days ago +3

      How low? The Redux model itself is very small, only 129MB, so if you have a low VRAM GPU just use the GGUF Flux models and you should be good to go! Runs great on my 3060 Ti with 8GB VRAM and the Q8 GGUF model.
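
      If it helps, loading a GGUF Flux model in ComfyUI goes through the ComfyUI-GGUF custom node pack rather than the regular checkpoint loader. A hypothetical API-format fragment (the node id and filename are assumptions; pick whichever quant fits your VRAM):

          import json

          gguf_loader = {
              "1": {  # hypothetical node id
                  "class_type": "UnetLoaderGGUF",  # from the ComfyUI-GGUF custom node pack
                  "inputs": {
                      # assumed filename; smaller quants (Q5, Q4) fit lower VRAM
                      "unet_name": "flux1-dev-Q8_0.gguf",
                  },
              }
          }
          print(json.dumps(gguf_loader, indent=2))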

    • @bause6182
      @bause6182 6 days ago +1

      @@MonzonMedia thank you, do we need another workflow for GGUF models?

    • @ApexArtistX
      @ApexArtistX 5 days ago

      @@MonzonMedia Q8 what's the size of

  • @hotlineoperator
    @hotlineoperator 3 days ago +1

    Some people keep several functions or "workflows" on one desktop, which they turn on and off as needed. Others keep separate workflows on completely different desktops or use them one by one. Is there a convenient function in ComfyUI that lets you switch between different workflows, as if you had several desktops open and chose the one that suits what you are doing?

    • @MonzonMedia
      @MonzonMedia  3 days ago +1

      The new ComfyUI has a workflow panel on the left that allows you to select your saved or recently used workflows. Alternatively, there is a fairly new tool I've been trying out called Flow that has several pre-designed workflows. The downside is that you can't save custom workflows yet, but I hear that option will come soon. I'll be doing a video on it soon. Other than that, yeah, it really is a personal thing as to what works best for you.

  • @eledah9098
    @eledah9098 4 days ago +1

    Is there a way to include LoRA models for inpainting?

    • @MonzonMedia
      @MonzonMedia  3 days ago

      Not sure what you mean? Do you want to use a LoRA to inpaint? It doesn't work that way.

  • @RiftWarth
    @RiftWarth 5 days ago +1

    Could you please do a video on crop and stitch with Flux tool inpainting?

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Yes of course! Will be doing it on my next inpainting video 👍

    • @RiftWarth
      @RiftWarth 5 days ago +1

      @MonzonMedia Thank you so much. Your tutorials are really good and easy to follow.

  • @ProvenFlawless
    @ProvenFlawless 6 days ago +1

    Huh. What is the difference between the XLabs and Shakker-Labs canny/depth controlnets? Why is this special? We already have two of them. Someone please explain.

    • @MonzonMedia
      @MonzonMedia  5 days ago +1

      From what I recall they have two: the Union Pro controlnet (6GB), which is an all-in-one with multiple controlnet types. It's pretty decent but still needs more training. They also have a separate depth model that is 3GB; this one is only 1.2GB. I've yet to do side-by-side comparisons though. It was the same with SDXL; we got other controlnets from the community until one was trained better. Keep in mind controlnet for Flux is still very new.

  • @generalawareness101
    @generalawareness101 4 days ago +1

    How do I get Flux to inpaint text? I have tried everything; all I want is to take an image and have Flux add the text it generates over it.

    • @MonzonMedia
      @MonzonMedia  4 days ago

      Same way you would prompt for it: just state in your prompt something like "text saying _________" and inpaint the area where you want it to show up.

    • @generalawareness101
      @generalawareness101 4 days ago

      @@MonzonMedia Tried doing that for a few days; it just never worked. I could say a lake, or an army, or whatever, and that it would do, but never the text. Stumped.

  • @rogersnelson7483
    @rogersnelson7483 2 days ago +1

    I tried both the big model and the FP8. Nothing but really BAD results. I don't know why. I'm using an 8GB VRAM card.
    All I get is random noise around the outpaint areas, and the original image is changed mostly to noise.
    Also, should it take 6 to 10 minutes for 1 image?

    • @MonzonMedia
      @MonzonMedia  2 days ago

      I'm going to do a follow-up video on inpainting. What is shown here is very basic and sometimes doesn't give the best results. There are a couple of other nodes that will help to get better results. Stay tuned!

    • @rogersnelson7483
      @rogersnelson7483 2 days ago +1

      @@MonzonMedia Thanks for your reply. I'll be watching. Keep up the good work as usual.
      Man, I started watching you at Easy Diffusion.

    • @MonzonMedia
      @MonzonMedia  1 day ago

      Whoa! That's awesome! 😁 I appreciate the support since then and now.

  • @Xenon0000000
    @Xenon0000000 3 days ago +1

    When I try the outpainting workflow, the pictures come out all pixelated, especially the added part. What am I doing wrong? I'm using the same parameters, denoise is already at 1.
    Thank you for your videos by the way, you should have way more subs!

    • @MonzonMedia
      @MonzonMedia  3 days ago +1

      @@Xenon0000000 appreciate the support and kind words. Are you using a high flux guidance? 20-30 works for me.

    • @Xenon0000000
      @Xenon0000000 3 days ago +1

      @@MonzonMedia I left it at 30, I'll try changing that parameter too, thank you.

    • @MonzonMedia
      @MonzonMedia  1 day ago +1

      I have a much better workflow that I'll be sharing with you all soon that gives better results. Hope to post it some time tomorrow (Wed).

  • @baheth3elmy16
    @baheth3elmy16 5 days ago +2

    Inpainting disturbs the composition of the image and the results are not that good; the same goes for outpainting, where the final images are distorted at the edges and lose details. I'm using the FP8 model.

    • @MonzonMedia
      @MonzonMedia  5 days ago

      I've had a good experience so far with both inpainting and outpainting. Make sure you are increasing the flux guidance. There are other methods of inpainting that should help preserve the original composition, which I will cover soon.
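
      For illustration, the flux guidance setting is a single node between the text encoder and the sampler. A minimal API-format fragment (node ids and the upstream conditioning link are hypothetical; FluxGuidance is the stock node):

          import json

          guidance_node = {
              "7": {  # hypothetical node id
                  "class_type": "FluxGuidance",
                  "inputs": {
                      "conditioning": ["6", 0],  # assumed upstream CLIPTextEncode node output
                      "guidance": 30.0,          # higher values (20-30) reportedly suit the fill model
                  },
              }
          }
          print(json.dumps(guidance_node, indent=2))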

  • @RikkTheGaijin
    @RikkTheGaijin 6 days ago +1

    SwarmUI tutorial please

  • @ApexArtistX
    @ApexArtistX 5 days ago +1

    You have 8GB VRAM? I have 8GB VRAM and it crashes on all non-GGUF models. How are you able to load an 11GB model on 8GB VRAM? 😮

    • @MonzonMedia
      @MonzonMedia  5 days ago

      Yup, it runs in low VRAM mode and offloads the rest to system RAM. Should be the same for you. It's not super fast, but Flux models take roughly a minute depending on the model I'm using and the size of the image. How much system RAM do you have?
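
      For anyone following along: ComfyUI normally picks a VRAM mode automatically, but the offloading can also be forced with a launch flag, assuming a standard install started from the ComfyUI folder:

          python main.py --lowvram

      The flag trades speed for capacity by keeping part of the model weights in system RAM instead of VRAM.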

  • @sokphea-h5q
    @sokphea-h5q 4 days ago

    Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). How can I fix this?

    • @benkamphuis5614
      @benkamphuis5614 4 days ago

      same here!

    • @MonzonMedia
      @MonzonMedia  4 days ago

      Did you do an update?

    • @_O_o_
      @_O_o_ 2 days ago

      I had the same problem. My DualCLIPLoader type was set to "sdxl", not "flux"... maybe that helps haha
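
      For reference, a minimal API-format fragment of that loader as described in the comment above (the node id and filenames are assumptions; the point is the type value):

          import json

          clip_loader = {
              "4": {  # hypothetical node id
                  "class_type": "DualCLIPLoader",
                  "inputs": {
                      "clip_name1": "clip_l.safetensors",            # assumed filenames
                      "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",
                      "type": "flux",  # per the comment above: "flux", not "sdxl"
                  },
              }
          }
          print(json.dumps(clip_loader, indent=2))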