ComfyUI Tutorial Series: Ep19 - SDXL & Flux Inpainting Tips with ComfyUI

  • Published: 26 Dec 2024

Comments •

  • @pixaroma
    @pixaroma  A month ago

    Join the conversation on Discord discord.gg/gggpkVgBf3
    You can now support the channel and unlock exclusive perks by becoming a member:
    pixaroma ruclips.net/channel/UCmMbwA-s3GZDKVzGZ-kPwaQjoin
    Check my other channels:
    www.youtube.com/@altflux
    www.youtube.com/@AI2Play

  • @Uday_अK
    @Uday_अK A month ago +7

    Thank you. I have been waiting for this in-painting and out-painting video for a long time. There are many tutorials available on RUclips, but understanding the method and implementing it is very important. You have explained it in a very simple way.

    • @pixaroma
      @pixaroma  A month ago +1

      thanks uday 🙂

  • @Patricia_Liu
    @Patricia_Liu A month ago +2

    Thanks for all your hard work! I really appreciate the effort you put into your tutorials.

    • @pixaroma
      @pixaroma  A month ago +2

      Thank you so much for your support 🙂

  • @ob3ythee.t.128
    @ob3ythee.t.128 16 days ago

    Very well done tutorial series, very easy to follow. I'd also recommend Stability Matrix if anyone doesn't want to set up ComfyUI this way.

  • @talismanna
    @talismanna A month ago +4

    That white t-shirt issue bugged me, and since I'm not an Adobe user, I stumbled into a different route. Using the same mask, I used the prompt "nude" and the red shirt was gone. Copying that across, I used "white t-shirt" and that worked! Also... "black hat on head" worked without leaving Comfy... I got lucky with that prompt. Thanks for another great workflow.

  • @sergeysaulit
    @sergeysaulit A month ago +1

    Simple, accessible and understandable! You are the best one who delivers complex information! Thank you and don't stop!

  • @59Marcel
    @59Marcel A month ago

    Great tutorial! You made everything so clear and easy to follow - thanks for breaking Inpainting down so well!

  • @SebAnt
    @SebAnt A month ago +1

    Wonderful tutorial once again.
    I like that you explained denoise and also demonstrated Photoshop techniques at the end! I will try it out once I get some free time…

    • @pixaroma
      @pixaroma  A month ago

      thank you 🙂

    • @cives
      @cives A month ago

      I, on the other hand, prefer no PS here. Or, if there is any, without using the AI in it, so that we can also use GIMP as an alternative. The magic of this channel is its focus on ComfyUI and its freedom to let us do what we want, under our control. I moved away from Leonardo and the like looking for that. Besides, PS is the exact opposite of free (in both meanings). If this channel gears toward more PS, it will lose me. There are other fantastic channels out there, and this one would lose its edge by losing its focus.

  • @pollapatjampeeklang7493
    @pollapatjampeeklang7493 A month ago

    It was worth the wait; you never truly disappoint.

  • @RealNazrax
    @RealNazrax A month ago +2

    I was hoping you'd cover inpainting! Now, please do IP Adapter :)

  • @baheth3elmy16
    @baheth3elmy16 A month ago +1

    Great episode! Thank you very much!!!!

  • @SumoBundle
    @SumoBundle A month ago +1

    Thank you for another fantastic episode!

  • @emergentcomics4926
    @emergentcomics4926 A month ago

    I've been waiting on this one. Thanks so much! I can't wait to give it a try

  • @Jinjinyajin
    @Jinjinyajin A month ago

    Love the tutorial details and it's easy to understand

  • @alexandrapadureanu4192
    @alexandrapadureanu4192 A month ago +1

    New interesting stuff ❤

  • @maxmad62tube
    @maxmad62tube A month ago

    Very impressive and informative, thank you very much

  • @JefHarrisnation
    @JefHarrisnation A month ago

    Love your tutorials.

  • @UmarandSaqib
    @UmarandSaqib A month ago +1

    awesome!

  • @arep7424
    @arep7424 A month ago

    For the color problem and fine-tuning denoise, just add a ControlNet (depth is good) with the cropped image as reference. This allows a higher denoise (even 1) on the sampler without deforming or transfiguring the subject.

    • @pixaroma
      @pixaroma  A month ago

      thank you, I will give it a try :)

  • @Filokalee999
    @Filokalee999 A month ago

    Thanks! Very clear and valuable inpaint tutorial. While trying a similar workflow, using a Differential Diffusion node in combination with InpaintModelConditioning seems to give better integration results than InpaintModelConditioning alone. Additionally, if this node refuses to inpaint the shirt, try using another inpaint model (such as BrushNet).

  • @hannibal911007
    @hannibal911007 A month ago +1

    Hi. Thanks for this valuable series on ComfyUI. I would suggest an episode on how to train a LoRA in ComfyUI: which workflow to use depending on the checkpoint, and the best checkpoints you may suggest we try... Keep going, and thanks.

    • @pixaroma
      @pixaroma  A month ago +1

      Thanks, yes, people keep asking me. I just didn't find an easy way to do it locally, only online, and that is not free. The methods I tried are harder to install and give errors that I can't explain to the community how to fix, since I am a designer, not a coder. So I have been waiting for an easy solution; FluxGym can work, but it still gives some errors sometimes.

  •  A month ago

    Exciting as always! Could you share the inpainting and outpainting workflow so that I don't have to rebuild it from the video?

    • @pixaroma
      @pixaroma  A month ago +1

      It's on Discord in the pixaroma-workflows channel; the link to Discord is in the channel header or in the video description.

    •  A month ago

      @@pixaroma Oh, I found it, thank you!

    • @pixaroma
      @pixaroma  A month ago

      This is the direct link; mention pixaroma there if you still can't find it: discord.com/channels/1245221993746399232/1270589667359592470/1300813045127188591

  • @bratan007
    @bratan007 A month ago

    Thank you for the great tutorial! How do you make workflow tabs in ComfyUI?

    • @pixaroma
      @pixaroma  A month ago

      Go to Settings (the gear wheel) and search for "workflow"; look for "Opened workflows position" and choose Topbar instead of Sidebar.

  • @radedr007
    @radedr007 A month ago

    great video

  • @kattamaran
    @kattamaran A month ago +1

    Add a depth ControlNet to make the generation more similar, especially in Flux.

  • @damnned
    @damnned A month ago

    Fooocus inpaint is the king, I think.

    • @pixaroma
      @pixaroma  A month ago

      I think I saw someone using a Fooocus model in ComfyUI, but I'm not sure.

  • @TimesNewRomanAI
    @TimesNewRomanAI A month ago

    Thank you very much for all the information and how easy it is to follow. What PC configuration do you have, and what image generation times do you get?

    • @pixaroma
      @pixaroma  A month ago

      RTX 4090 with 24 GB of VRAM, and 128 GB of RAM. It depends on the model, between 3 and 15 seconds; SDXL is fast, and Flux takes about 14-15 seconds to generate an image.

    • @TimesNewRomanAI
      @TimesNewRomanAI A month ago

      @@pixaroma Thanks for the info. 15 seconds is really fast

  • @r3vdev
    @r3vdev A month ago +1

    When are you going to do an IMG2VIDEO tutorial???? :)

    • @pixaroma
      @pixaroma  A month ago +1

      When there is a good video model; so far the video models I've seen are not really usable compared with Kling AI, for example, which creates decent video.

  • @sollmasterdoodle
    @sollmasterdoodle A month ago

    Thx! But what version of ComfyUI do you have?

    • @pixaroma
      @pixaroma  A month ago

      If I go to Manager and scroll down on the right, I see this: ComfyUI: 2797[770ab2](2024-10-29)
      Manager: V2.51.8
      As for the release, it is v0.2.5

    • @sollmasterdoodle
      @sollmasterdoodle A month ago

      @@pixaroma Thx, but is it the .exe, the ComfyUI Desktop V1 I'm waiting for?

    • @pixaroma
      @pixaroma  A month ago +1

      @@sollmasterdoodle No, that is still in beta from what I know, and if you are not on the list you don't have access to it yet.

    • @sollmasterdoodle
      @sollmasterdoodle A month ago +1

      @@pixaroma Thx a lot to you! Perfect video!

  • @audiogus2651
    @audiogus2651 A month ago

    Is there a Comfy node that lets you paint on the image like in Forge?

    • @pixaroma
      @pixaroma  A month ago

      I don't know of one, but there are so many nodes, I am sure there are some that let you do that.

  • @CharlesPrithviRaj
    @CharlesPrithviRaj 21 days ago

    How do we include the new Flux Canny and Depth LoRAs in the inpainting workflow? How do we combine the Pix2Pix instruct and inpainting conditioning nodes?

    • @pixaroma
      @pixaroma  20 days ago

      You can connect the LoRA just like I did at the end of episode 24, but I remember I tried something and it didn't work quite as expected; maybe the LoRA needs 10 for Flux guidance and inpainting didn't need that much, or maybe I didn't use the right settings.

    • @CharlesPrithviRaj
      @CharlesPrithviRaj 20 days ago

      @@pixaroma Yes, I was confused about which latent should go to the KSampler. Is it the one from the inpaint conditioning or the one from Pix2Pix? See if you can figure it out.

    • @pixaroma
      @pixaroma  20 days ago +1

      @@CharlesPrithviRaj Ah, I see what you mean; that is probably why I didn't get the result I expected, I think I skipped one of the inpaint nodes :)) I didn't test it, but you can try one of these nodes: LatentAdd or LatentBlend. Take the latent from both inpaint nodes into one of those latent nodes, and from that node connect to the KSampler, so you combine both latents. In theory it should work, but as I said, I didn't test it; let me know if it works.

  • @Gabbaki
    @Gabbaki 15 days ago

    Can you tell me how I can change the color of a garment with such a small workflow? But without changing the garment! Only the color.

    • @pixaroma
      @pixaroma  15 days ago

      It's hard to keep things intact; I would probably just change the color in Photoshop and use image-to-image with a low denoise to better blend the colors.

  • @yasinolgun7548
    @yasinolgun7548 A month ago

    In the intro of the video, the picture of the woman appears in motion. How can I do that? What should I research?

    • @pixaroma
      @pixaroma  A month ago

      I used image-to-video on the platform klingai.com/ So far I haven't found a free option that does it as well locally, which is why I used an online platform.

  • @ian2593
    @ian2593 A month ago

    After upgrading to the new interface, I can't tell where my workflows are being saved to. Is there some way to tell?

    • @pixaroma
      @pixaroma  A month ago +1

      You can export it to any folder you want: go to the top-left corner, Workflow, choose Export, and pick a folder; with Workflow > Open you can open it again. If you use Save or Save As, it will go into the workflows folder; the path is something like ComfyUI_windows_portable\ComfyUI\user\default\workflows

  • @topy706
    @topy706 A month ago

    Is there a way to increase the batch size?

    • @pixaroma
      @pixaroma  A month ago

      From the Queue you have batch size 1; you increase that number.

    • @topy706
      @topy706 A month ago

      @@pixaroma But in the Queue that's batch count, not batch size. I tried adding a "Repeat Latent Batch" node, but the stitch node does not like it.

    • @pixaroma
      @pixaroma  A month ago

      I don't know of any; I usually just use that batch and check back in a few minutes, or use the increment queue, so only if you find a node to do that.

    • @topy706
      @topy706 A month ago

      @@pixaroma Okay, thank you anyway. 👍

  • @AInfectados
    @AInfectados A month ago

    And what about the *Controlnet Inpaint BETA* for FLUX?

    • @pixaroma
      @pixaroma  A month ago +1

      I didn't try it yet; since it is Flux + ControlNet, it will take extra time to generate because it is loading an extra model, but I will play with it to see if I can get better results.

  • @bentontramell
    @bentontramell A month ago +1

    Flux chin sighted. 😅😊

    • @pixaroma
      @pixaroma  A month ago +2

      😁 It can be fixed with SDXL inpaint 😂

  • @bubuububu
    @bubuububu A month ago

    Excuse my amateur question, but how can I batch more images?

    • @pixaroma
      @pixaroma  A month ago

      It depends: you can use a load-image node from iTools, for example, that lets you take images from a folder, or you can use the batch option next to Queue to run the workflow multiple times, depending on what you need to do. Check episode 15 if you want to load a folder of images.

    • @bubuububu
      @bubuububu A month ago

      @@pixaroma I don't think I'm understanding. So, for example, if I want to get 10 results for different glasses on the same face, where do I put the batch size?

    • @pixaroma
      @pixaroma  A month ago

      @@bubuububu It's hard to explain in text; it would be much easier to show a screenshot on Discord. If you have the new interface with the floating QUEUE button that has a number next to it, like 1, you can increase that number to 10 and it will run the workflow 10 times; if your seed is not fixed (usually it's set to randomize), it will run 10 times, each time with a different seed, so different results, and then it will stop. If you have the old interface with the QUEUE PROMPT button, then under it you have a checkbox, Extra Options, and there you have batch count, which defaults to 1, so you can put 10 there and run it.

    • @bubuububu
      @bubuububu A month ago +1

      @@pixaroma Thanks a lot. I knew I was missing something basic 😅

  • @eros6398
    @eros6398 7 days ago

    Friend, I have an annoying problem: I just can't find the florence2 imageprompt node.

    • @pixaroma
      @pixaroma  7 days ago

      There are two; I use this one, ComfyUI-Florence2 by kijai. You can find it in Manager. This is their page: github.com/kijai/ComfyUI-Florence2 Look at how I use it in episode 11: ruclips.net/video/yutYU97Bj7E/видео.htmlsi=EZ2yHl6-7tsbGapX

    • @eros6398
      @eros6398 7 days ago

      @@pixaroma thank you, man

  • @icedzinnia
    @icedzinnia A month ago

    When I did this tutorial, it didn't work (5:52). I realized after much fiddling around, using different loaded images and different masks, that it won't work if the mask is not continuous. I couldn't make it work when I had a mask on the far right edge and a separate one on the far left edge. WEIRD.

    • @pixaroma
      @pixaroma  A month ago

      I usually make one selection. There is an option on the Inpaint Crop node called fill_mask_holes; maybe try turning it off to see if it helps. If not, maybe it's a bug.

  • @OaklandSignalCollapse
    @OaklandSignalCollapse 21 days ago

    Thanks for not loading this up with a bunch of BS

  • @jessedbrown1980
    @jessedbrown1980 A month ago

    I am trying to do a single-transform inpainting with Flux 1.1 dev trained LoRAs and the same model for inpainting. The model is trained on materials, and when I run it through with a mask, the whole picture changes instead of just the masked area. I might be missing a node, but I just want the masked area to be changed, and it always changes the whole picture and puts artifacts in it.
    I already have LoRA strength at 1 and I need to play with it a lot, but I have not been able to control the mask area or the picture, so weird! The setup you have does not have a LoRA in it, which would be effective; does anyone have a setup with a LoRA?

    • @pixaroma
      @pixaroma  A month ago +1

      I didn't try it with lora yet

    • @jessedbrown1980
      @jessedbrown1980 A month ago

      @@pixaroma Would you be interested in collaborating? I need to get it done! Good experience too; we have like 700 LoRAs trained and waiting to be used!!

    • @pixaroma
      @pixaroma  A month ago +1

      @@jessedbrown1980 I am checking the message on Discord now.

  • @devon9374
    @devon9374 A month ago +1

    Man, Adobe is cooked...