ComfyUI FLUX - Accurate & Easy Inpainting Technique

  • Published: 18 Dec 2024

Comments • 125

  • @philippeheritier9364
    @philippeheritier9364 4 months ago +19

    perfect tutorial without unnecessary blah blah THANKS!

  • @stepfury
    @stepfury 3 months ago +4

    TIP: After installing the nodes, you can download his workflow from the description and skip to 4:33.

  • @NeonXXP
    @NeonXXP 4 months ago +12

    I found reducing conditioning to 2.2 really helps keep the inpaint in line with the image aesthetic.

  • @cXrisp
    @cXrisp 4 months ago +3

    This works great for me! Looks to be my go-to inpainting method for now. And you are my go-to channel for ComfyUI workflows! Don't change a thing.

  • @59Marcel
    @59Marcel 4 months ago +3

    Great job. I usually don't like non-verbal tutorials, but yours worked very well; playing it at 0.5 speed made it easier to follow. Still, I'm looking forward to when you add the voice-over. I have subscribed. Thank you.

  • @TohkaTakushi
    @TohkaTakushi 1 month ago

    This was wildly helpful. Easy to setup and worked the first time.

  • @williamlocke6811
    @williamlocke6811 4 months ago +4

    That's fantastic. I've just started learning Comfy and the speed that you work and the relaxing background music really help my brain not to freak out. Great video! :)

    • @CgTopTips
      @CgTopTips  4 months ago +1

      Thanks, I will be adding voice over to the videos soon

    • @marcosgrima2237
      @marcosgrima2237 3 months ago

      @@CgTopTips Good, very good. One question: the same workflow but adding a LoRA?

  • @stropol111
    @stropol111 3 months ago

    You are the best!!! Your videos are very useful and informative. I'm a beginner and I figured it out right away, though I had to think a little about where to put which models. The photos are very realistic. THANKS!!!

  • @jonmichaelgalindo
    @jonmichaelgalindo 4 months ago +3

    Surprising. I'll test after work.

  • @elite_pencil
    @elite_pencil 2 months ago

    Wow, I learned a lot, and I even feel better from the music. Am I the only one who goes kind of crazy trying to figure this stuff out at times? Especially training LoRAs. The nice music is very, very welcome 👍 Liked, subbed and saved

  • @Евгений-м6ы3и
    @Евгений-м6ы3и 2 months ago

    I tried this in the beta version of Photoshop; as a result I got a different face. This workflow is noticeably better.

  • @LudovicCarceles
    @LudovicCarceles 4 months ago

    Great stuff! I replicated it but with the standard checkpoint loader and KSampler with the Flux tensorpoint

  • @TransformXRED
    @TransformXRED 4 months ago +3

    This workflow with the new smaller version of the model would be great

  • @marweiUT
    @marweiUT 3 months ago

    Thank you, that is simply great!

  • @IONT781
    @IONT781 4 months ago +7

    I followed step by step everything shown in the video, and everything worked perfectly! Are there any resources you can recommend to learn more about what each node does and why it is placed where it is? There is quite a lot of information, but it is a bit difficult to know which is the most appropriate, as there is too much!!! Thanks for the work you are doing and how detailed your videos are :D

  • @jahinmahbub8237
    @jahinmahbub8237 3 months ago

    This is the sort of tutorial that I like. A fucking 10-year-old can follow this tutorial. Excellent.

  • @BenSant
    @BenSant 2 months ago +1

    Anyone else getting an Out of Memory error on the SamplerCustomAdvanced node when processing? I'm running an RTX 4090 and 128 GB RAM. Both CUDA and PyTorch are updated, and I've tried the fp8 vs fp16 Flux models and the NF4 node. I'm hunting in the Git forums but haven't found a solution.

  • @WiLDeveD
    @WiLDeveD 4 months ago +2

    Very useful tutorial, thanks. Could we use this method, or image-to-image, for manipulating objects like a desk or a ladder?

  • @Ton_DayTrader
    @Ton_DayTrader 3 months ago +1

    Please help, I see this issue: "Invalid background color: image"

    • @renegat552
      @renegat552 3 months ago +2

      A new entry called “background_color” has been added to the “Load & Resize” node. It has the value “image”. Simply delete this value.

  • @AL_Xmst
    @AL_Xmst 4 months ago

    Awesome! Great job!

  • @tetsuooshima832
    @tetsuooshima832 4 months ago +1

    Can I ask what the Differential Diffusion node is for? Also, by any chance do you know why nobody seems to be using the fp8_e5m2 dtype for the model?

    • @CgTopTips
      @CgTopTips  4 months ago +3

      DiffDiff is a form of inpainting that sets a denoise strength per pixel depending on the mask you pass it
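To make the reply above concrete, here is a minimal NumPy sketch of the per-pixel idea (purely illustrative; the function and the thresholding scheme are my own simplification, not the actual ComfyUI node internals): a soft 0..1 mask decides how late in the sampling schedule each pixel keeps being updated, which behaves like a per-pixel denoise strength.

```python
import numpy as np

def differential_diffusion_step(latent, denoised, mask, step, total_steps):
    """Blend the denoised prediction into the latent per pixel.

    Pixels whose mask value is above the current step's threshold are
    updated; the rest keep the original latent. A soft (0..1) mask thus
    acts as a per-pixel denoise-strength schedule rather than a hard
    on/off inpainting region.
    """
    threshold = 1.0 - step / total_steps   # shrinks as sampling proceeds
    update = (mask >= threshold).astype(latent.dtype)
    return update * denoised + (1.0 - update) * latent

# toy example: 4x4 "latent", uniform half-strength mask
latent = np.zeros((4, 4))
denoised = np.ones((4, 4))
mask = np.full((4, 4), 0.5)

# late in the schedule the 0.5-mask pixels are already being updated
out = differential_diffusion_step(latent, denoised, mask, step=8, total_steps=10)
```

Early in the schedule (large threshold) the same 0.5 mask leaves those pixels untouched, which is why softer mask values change less than fully-white ones.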

  • @nikitaoleinik4386
    @nikitaoleinik4386 4 months ago +4

    It seems like the deleted mask isn't removed from the inpaint properly. After the clothes swap, you swap her hair with the mask only on the hair, but the clothes also get swapped. Same with the face swap: her hair changes, but now the clothes are okay. Bug? Great tutorial btw.

    • @klauskranewitter3635
      @klauskranewitter3635 4 months ago +1

      I experience the same problem. No matter what I do, the ganguro makeup reappears in all later images 😂 Do you have a fix to clear the deleted masks for good? I think the mask becomes part of InpaintModelConditioning and stays there, but I don't know how to repair this. Thank you for the great tutorial!

  • @МаринаСергеева-о2т
    @МаринаСергеева-о2т 1 month ago +1

    It constantly throws an error: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 135, 102] to have 4 channels, but got 16 channels instead. What does it mean?

  • @CosmicFoundry
    @CosmicFoundry 2 months ago +1

    Which workflow on your page? It's confusing because they are named differently from the YT video. Good stuff, thanks!

  • @ZakariaNada
    @ZakariaNada 4 months ago +1

    Great content. Please share the haircut sheet as well. Thanks

  • @camelCased
    @camelCased 4 months ago +2

    Ok, now I know what ganguro makeup is :)

    • @DinJerr
      @DinJerr 4 months ago +1

      Whoever taught Flux what ganguro makeup looks like needs to make a trip to Japan.

  • @Blucee-w3k
    @Blucee-w3k 3 months ago +1

    LoadAndResizeImage, ImpactGaussianBlurMask, and Anything Everywhere are missing. How/where do I download these?

  • @zakg60
    @zakg60 4 months ago +1

    'diffusion_pytorch_model.safetensors' does not support control_type 'inpaint/outpaint'. ........ What does this mean? I've tried the file both with the original name and with your recommended name change.

  • @Markgen2024
    @Markgen2024 3 months ago +2

    I was so pumped when this one came out, but so sad it didn't work for me because this error appeared: Error occurred when executing LoadAndResizeImage: Invalid background color: image. I changed the background color to use the masking color, but it gives a bad result. I hope this one will work on Flux models.

    • @renegat552
      @renegat552 3 months ago

      A new entry called “background_color” has been added to the “Load & Resize” node. It has the value “image”. Simply delete this value.

  • @UnchartedWorlds
    @UnchartedWorlds 4 months ago

    Incredible!

  • @teambellavsteamalice
    @teambellavsteamalice 3 months ago

    Would it be nice/possible to adjust this into a generic process, where you have a face, an expression, a hairdo, a body, an outfit, a pose, and a background, and combine all of them in a final image?
    Would such a flow be very complicated, and would it keep all elements consistent?

  • @sandocarlos1
    @sandocarlos1 4 months ago

    I don't have the (Load and Resize Image) box installed. What do I have to install for it to appear? My native language is Spanish, so excuse my way of writing.

  • @erans
    @erans 2 months ago

    How does this work without a ControlNet model like OpenPose to recognize which way to put the new clothes on?

  • @xyzxyz324
    @xyzxyz324 3 months ago

    Thank you very much for the video. In my opinion, a simple inpaint (like fixing a small area in an image) should not require so many nodes. Aren't there any combined nodes, at least to reduce the complexity of the generation process?

  • @zRegicideTVz
    @zRegicideTVz 4 months ago

    How can I use the newest Schnell fp8 model with this? Rename the file to .sft and copy it to the unet folder?

  • @frustasistumbleguys4900
    @frustasistumbleguys4900 4 months ago +1

    Can we use IPAdapter for the face with Flux?

    • @colinfransch6054
      @colinfransch6054 4 months ago

      I don’t think so but there is a controlnet model available

    • @CgTopTips
      @CgTopTips  4 months ago +1

      Unfortunately, Flux doesn't support ControlNet and IPAdapter yet

    • @colinfransch6054
      @colinfransch6054 4 months ago

      @@CgTopTips Check out the Canny ControlNet model by XLabs AI

    • @colinfransch6054
      @colinfransch6054 4 months ago

      @@CgTopTips okay I understand

    • @Im_JustKev
      @Im_JustKev 4 months ago

      @@colinfransch6054 There is a ControlNet now. However, many people have been having errors, and it's only one type of control: Canny. It's alpha, and for the Dev model only. I'm sure more will be coming soon. AI comes at you fast. I don't think I can link it here, but a search should find it. I couldn't get it to work personally.

  • @marcosgrima2237
    @marcosgrima2237 3 months ago

    The same workflow but adding a LoRA?

  • @huichan5140
    @huichan5140 4 months ago

    May I know what the difference is between InpaintModelConditioning and Set Latent Noise Mask? I tried both and can't see an obvious difference.

    • @CgTopTips
      @CgTopTips  4 months ago +1

      The difference lies in the types of inputs and outputs for each node, and in this workflow, we used InpaintModelConditioning to achieve the desired results.
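One illustrative way to picture the difference the reply hints at (a rough NumPy sketch with made-up function names, not the nodes' actual implementations): a latent noise mask only composites the model's output back inside the mask after each step, while inpaint-model conditioning additionally hands the mask and the masked source image to the model as extra conditioning channels, so the model itself knows which region it is filling.

```python
import numpy as np

def composite_with_latent_mask(original_latent, denoised_latent, mask):
    """'Set Latent Noise Mask' idea: the model denoises normally and we
    paste its result back only inside the mask; the model never sees
    the mask directly."""
    return mask * denoised_latent + (1.0 - mask) * original_latent

def build_inpaint_conditioning(original_latent, mask):
    """'InpaintModelConditioning' idea: the mask and the masked source
    latent are stacked into the conditioning, so the region to fill is
    visible to the model while it denoises."""
    masked_source = (1.0 - mask) * original_latent
    return np.concatenate([masked_source, mask], axis=0)  # extra channels

# toy 1-channel 2x2 latent, mask covering the left column
latent = np.arange(4.0).reshape(1, 2, 2)
mask = np.array([[[1.0, 0.0], [1.0, 0.0]]])

cond = build_inpaint_conditioning(latent, mask)          # shape (2, 2, 2)
out = composite_with_latent_mask(latent, np.zeros_like(latent), mask)
```

With a hard binary mask the two approaches often look similar, which matches the commenter's observation; the conditioning route mainly helps models trained to use that extra mask input.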

  • @twalling
    @twalling 2 months ago

    I can't see the workflow anywhere in the description... can someone point me to it?

  • @HealthHelper007
    @HealthHelper007 4 months ago

    I loved the tutorial. Nice pace and flow in building the workflow. I managed to copy the exact workflow, but somehow it's not working. I get the same image duplicated on the output, with minor artefacts showing that it did see the mask, but it didn't actually change the clothes or anything. Does anyone have any ideas?

  • @krasen671
    @krasen671 4 months ago

    It works fine, but is there a better way to generate a single person in the photo without the AI cutting them off by making them too big?

    • @CgTopTips
      @CgTopTips  4 months ago

      Try photoshop :)

  • @mvamorim
    @mvamorim 4 months ago

    How do I show this manager for custom nodes? Mine doesn't have it.

  • @jonw377
    @jonw377 4 months ago

    Great tutorial. The only thing that I miss is being able to make a batch instead of a single image. I tried putting in an empty latent node, but it did not work.

  • @3k3k3
    @3k3k3 4 months ago

    Thanks!

  • @jasontaylor4582
    @jasontaylor4582 2 months ago

    I tried to reproduce your great results at 5:48, and all the hair comes out "small", meaning it's just around the skull. Even when I prompt "big hair, 1980s hair," or some of the hairstyles you have, it all just crops to the head. Is there some trick to getting it big? I kept making my mask bigger and bigger, and no love..... Thanks!

  • @Mnbir
    @Mnbir 4 months ago +1

    awesome

    • @CgTopTips
      @CgTopTips  4 months ago +1

      Thanks, I will soon make a video about combining IPAdapter and Flux

    • @Mnbir
      @Mnbir 4 months ago

      @@CgTopTips Yes, I am already trying with InstantID and Flux Dev, so far unsuccessfully. I will wait for your video. Thanks

  • @OO-yj7ld
    @OO-yj7ld 3 months ago

    I can't find the custom nodes in the manager

  • @kirikoverwatch
    @kirikoverwatch 3 months ago

    LoadAndResizeImage
    Invalid background color: image

  • @OmniEngine
    @OmniEngine 4 months ago

    You rock!

  • @sundayevents1411
    @sundayevents1411 3 months ago

    Excellent hard work, but it's only useful for users with a technical background

  • @AInfectados
    @AInfectados 3 months ago

    How do I use GGUF models?

  • @Anrodj3
    @Anrodj3 4 months ago

    thanks bud

  • @BibhatsuKuiri
    @BibhatsuKuiri 4 months ago

    Is the 23 GB justified? Or is the old technique of using SDXL good enough... has anyone compared?

    • @CgTopTips
      @CgTopTips  4 months ago

      24 GB of VRAM is good; more VRAM, more speed

    • @ickorling7328
      @ickorling7328 4 months ago

      @@CgTopTips So wait, why not go for an AMD APU with XDNA? You'd use Ryzen AI and ONNX from the Hugging Face x AMD collab to convert CUDA models to AMD models. On a Windows machine it allocates system RAM as VRAM dynamically. The biggest bottleneck for the AI economy is crushed 😍

    • @-Jakob-
      @-Jakob- 4 months ago

      @@ickorling7328 it's slow though

  • @AInfectados
    @AInfectados 3 months ago

    Missing nodes:
    - Anything Everywhere
    - ImpactGaussianBlurMask
    - LoadAndResizeImage
    And I can't find them in the Manager.

  • @markbarnes-d8p
    @markbarnes-d8p 4 months ago +6

    I love Flux but this looks far too complicated for the average person.

    • @4.0.4
      @4.0.4 4 months ago

      It's not for the average person. The average person would use ChatGPT and pay $20 a month for DALL-E or however much Adobe Firefly costs.

    • @LukasBazyluk1117
      @LukasBazyluk1117 4 months ago +7

      The non-average people are very happy about it.

    • @OrbitalCookie
      @OrbitalCookie 3 months ago +1

      There are different tools; you can also pay a subscription and get everything already configured

    • @FantasyFlix777
      @FantasyFlix777 2 months ago

      Happy to hear it

    • @ario9907
      @ario9907 1 month ago +1

      You have definitely not seen a complicated ComfyUI workflow yet, then

  • @make7582
    @make7582 4 months ago

    Is it possible to upload a mask? I need to change the background behind people.

    • @CgTopTips
      @CgTopTips  4 months ago

      Yes, but you need to make some changes to the workflow. Since you only want to change the background of an image, the simplest way is to select it manually

  • @ZeroCool22
    @ZeroCool22 4 months ago

    All good and fun, but the VRAM reqs. need to go down.

  • @JohnSmith-u8h
    @JohnSmith-u8h 2 months ago

    If I wanted to add a LoRA, where would I add it?

  • @ArtesGraficasDigitales
    @ArtesGraficasDigitales 4 months ago

    I can't enter this black screen to do the workflow

    • @CgTopTips
      @CgTopTips  4 months ago

      Make sure all the nodes' connections are correct. You can send me a screenshot of your workflow via email

  • @yiluwididreaming6732
    @yiluwididreaming6732 4 months ago +1

    First run using the model on a 4090 Ti, 16 GB VRAM: 10 minutes to load and process..... Second run, once the model was loaded: 59 seconds.... Still testing...... Weird results so far... The only difference I can see compared to other inpainting methods such as BrushNet and PowerPaint is that you get text with this model....

    • @Kamihakker
      @Kamihakker 4 months ago +1

      The first time will always be slow because it's loading the model, and depending on your HDD/SSD/NVMe that will take time. In my case, if I use the NVMe it takes just a few minutes; if I use my HDD it takes a looooong time... I have a 4090 as well.

  • @ioan_astro
    @ioan_astro 4 months ago

    I keep getting a black image at the end :(

    • @CgTopTips
      @CgTopTips  4 months ago

      Make sure you've selected the appropriate models for each node according to the video

  • @haljordan1575
    @haljordan1575 4 months ago +1

    The most important part missing is styles

    • @CgTopTips
      @CgTopTips  4 months ago +2

      I will soon make a video about combining IPAdapter and Flux

    • @hellfire3278
      @hellfire3278 4 months ago

      I will wait for it. I want to try copying a face onto a body with inpainting and IPAdapter. Is it possible?

  • @EverlongEditz
    @EverlongEditz 3 months ago

    Does ComfyUI charge you by the hour?

  • @JMTChina-vo7rb
    @JMTChina-vo7rb 4 months ago

    Can anyone share the workflow JSON file?

    • @CgTopTips
      @CgTopTips  4 months ago +1

      Please find it in the description (OpenArt link)

  • @fun7704
    @fun7704 3 months ago +1

    This is probably a good tutorial and stuff, but let's take a moment to realize how unnecessarily complicated ComfyUI is in comparison to AUTOMATIC1111. I mean, all the stuff you have to set up beforehand..

  • @plattendoktor
    @plattendoktor 4 months ago

    Awesome 👍

  • @RagonTheHitman
    @RagonTheHitman 4 months ago

    We've got no inpaint model yet?!

  • @AI_Image_Master
    @AI_Image_Master 4 months ago +1

    Good job. But it shows why I hate ComfyUI. You need to understand all those connections perfectly and what each node does exactly. Yes, it is more powerful than AUTOMATIC1111, but certainly not more user-friendly.

    • @CgTopTips
      @CgTopTips  4 months ago

      I suggest that you follow the workflows for a few months to gradually become familiar with the applications of the nodes

    • @AI_Image_Master
      @AI_Image_Master 4 months ago

      @@CgTopTips Thanks. I have been using it for a while now and I know the nodes and what to connect, but it can be difficult for someone who is just learning. Good video for those learning. I just started testing Flux. Your example worked perfectly.

  • @patagonia4kvideodrone91
    @patagonia4kvideodrone91 4 months ago

    Needs automatic mask nodes

  • @DJHUNTERELDEBASTADOR
    @DJHUNTERELDEBASTADOR 1 month ago

    They come out looking quite rubbery.. I prefer Stable Diffusion at a professional level

  • @the_RCB_films
    @the_RCB_films 4 months ago

    Why not just publish the workflow so we can download it?

    • @CgTopTips
      @CgTopTips  4 months ago +2

      I have uploaded all the workflows to the OpenArt website; the link is in the description

  •  4 months ago +4

    Please share your workflow!

    • @CgTopTips
      @CgTopTips  4 months ago

      Please check description section

    •  4 months ago

      @@CgTopTips Oh, yes, thanks!

  • @vifvrTtb0vmFtbyrM_Q
    @vifvrTtb0vmFtbyrM_Q 3 months ago

    Too difficult; give me a one-button solution.

  • @Celenwen
    @Celenwen 3 months ago

    Heard the music; the music stressed me... goodbye.

  • @surflaweb
    @surflaweb 3 months ago

    Rocket science

  • @iamnaudar
    @iamnaudar 3 months ago

    We all know what people are gonna do

    • @zengrath
      @zengrath 16 days ago

      Whatever do you mean? Like make photos of cats? Yes, everyone will make cat photos!

  • @CyberwizardProductions
    @CyberwizardProductions 4 months ago

    A tutorial without a voice-over is a worthless tutorial

    • @cXrisp
      @cXrisp 4 months ago +3

      nah, it's fine.

  • @leandrogoethals6599
    @leandrogoethals6599 3 months ago

    It would be nice to first list the hardware requirements (how much VRAM you need, etc.)
    and links to the exact models, so we don't end up with a dead workflow.
    Mine didn't work; I got:
    '🔥 - 20 Nodes not included in prompt but is activated'
    model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
    model_type FLOW
    clip missing: ['text_projection.weight']
    Requested to load FluxClipModel_
    Loading 1 new model
    loaded partially 3668.486622619629 3667.77197265625 0
    Unloading models for lowram load.
    0 models unloaded.
    Requested to load TAESD
    Loading 1 new model
    loaded completely 0.0 9.32717514038086 True
    Unloading models for lowram load.
    1 models unloaded.
    Loading 1 new model
    loaded completely 0.0 9.32717514038086 True
    Requested to load Flux
    Loading 1 new model
    loaded partially 3508.182622619629 3505.5234985351562 0
    0%| | 0/20 [00:00