L4: Img2Img Painting in ComfyUI - Comfy Academy

  • Published: 1 Jan 2025

Comments

  • @OlivioSarikas
    @OlivioSarikas 11 months ago +3

    Find my Workflow here: openart.ai/workflows/oliviosarikas/lesson-4---comfy-academy/33ECh584TbdXjPyitkff
    Other Lessons:
    L1: Using ComfyUI, EASY basics : ruclips.net/video/LNOlk8oz1nY/видео.html
    L2: Cool Text 2 Image Trick in ComfyUI : ruclips.net/video/6kHCE1_LaO0/видео.html
    L3: Latent Upscaling in ComfyUI : ruclips.net/video/3W-_B_0F7-g/видео.html

  • @p_p
    @p_p 11 months ago +10

    I have no words to describe my gratitude. Definitely going to buy you some coffee.

  • @JBereza
    @JBereza 4 months ago +1

    Thanks!

  • @XuRZaL
    @XuRZaL 3 months ago

    Thank you, this has been so helpful. ComfyUI is very user-friendly once you get past the learning curve (which you have walked me through), and you have absolutely made that curve so much shorter. On to L5!!!

  • @JBereza
    @JBereza 4 months ago +1

    Outstanding tutorials! Easy to understand and follow. :-) I hope to see more of them :-)

  • @impek667
    @impek667 1 month ago

    This is so cool!!!!!!!!!!!

  • @vpakarinen
    @vpakarinen 8 months ago +4

    I see why people prefer ComfyUI; you have much more control.

    • @구원자님
      @구원자님 8 months ago +2

      I don't wanna go back to A1111 anymore, myself XD

    • @devon9374
      @devon9374 3 months ago

      Reminds me so much of DaVinci Resolve

  • @IsJonBP
    @IsJonBP 11 months ago +1

    I want the next lesson already!! thanks, Olivio!!

  • @joskun
    @joskun 7 months ago

    Catching up slowly hehe.
    Amazing lesson and great instructions.
    Let's go!!!

  • @bunnymeng
    @bunnymeng 10 months ago

    Thank you so much for the detailed video! It helps me a lot! :)

  • @joywritr
    @joywritr 10 months ago

    Thanks for these videos, you explain them well. I'm fairly new to Comfy (coming from A1111) and primarily want to run my own digital art through AI, so these tutorials are helpful. :)

  • @dariayudina8463
    @dariayudina8463 6 months ago

    that is soooo cool, great job man!

  • @구원자님
    @구원자님 8 months ago

    I love you and your tutorials, so much man! 💛👏🏻

  • @tzgaming207
    @tzgaming207 9 months ago

    This is pretty much what pushed me to switch to ComfyUI, thank you! Trying to get Latent Couple and Composable LoRA masking to work correctly in A1111 or Forge was driving me nuts 😂

  • @DaBonQ
    @DaBonQ 11 months ago +2

    What would be the best way to create a sketch image like the one you are using? Perhaps even directly within ComfyUI? Thanks for your video!

  • @rickyvonicky4371
    @rickyvonicky4371 7 months ago

    Amazing!

  • @aliyilmaz852
    @aliyilmaz852 10 months ago

    Thank you so much, Olivio! It is the first time I understand how img2img works.
    Should loaded images have the size that we want to generate? (e.g. 512x768)

  • @TheRMartz12
    @TheRMartz12 9 months ago

    amazing

  • @wangzip321
    @wangzip321 5 months ago +2

    I keep producing very blurred images for unknown reasons with exactly the same workflow. I've tried switching between different checkpoints and VAEs. Does anyone have any ideas?

    • @Acid_Ash
      @Acid_Ash 4 months ago

      This happened to me too. Increase your steps in the sampler, and also increase the CFG; that should iron things out. The bigger the increase, the longer it will take, but the clearer the result will be. (A rough sketch of those settings follows below.)
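
A minimal sketch of the settings that reply points at, written as a ComfyUI API-format prompt fragment rather than anything taken from the lesson workflow; the node ID, the connected node references, and the values are hypothetical placeholders shown only as example starting points.

```python
# Minimal sketch (not from the video): a KSampler node in ComfyUI's
# API-format prompt JSON with raised steps and CFG, as the reply suggests.
# The node ID "3" and the connections ("4", "6", "7", "10") are hypothetical.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,
            "steps": 30,                # raised from a lower default
            "cfg": 8.0,                 # higher CFG can help sharpen blurry output
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 0.6,             # img2img: how strongly the input is repainted
            "model": ["4", 0],          # from the Load Checkpoint node
            "positive": ["6", 0],       # from CLIP Text Encode (positive prompt)
            "negative": ["7", 0],       # from CLIP Text Encode (negative prompt)
            "latent_image": ["10", 0],  # from VAE Encode of the loaded image
        },
    }
}

print(ksampler_node["3"]["inputs"]["steps"], ksampler_node["3"]["inputs"]["cfg"])
```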

  • @typingcat
    @typingcat 3 months ago

    It has the "Load VAE" node. What VAE should I use and where can I get one? I see there are some VAEs on Civitai (far fewer than regular models), but I am not sure which to use.

  • @FuZZbaLLbee
    @FuZZbaLLbee 11 months ago +1

    I use a LoRA trained on my own face. It seems that the prompt has to be almost the same as the example prompt provided by the LoRA trainer to get a recognizable face.
    When I use ControlNet, the face becomes different and doesn't look like me anymore.
    Is there a solution for this?

  • @netron66
    @netron66 6 months ago

    Does the VAE need to be different if I'm using a different model?

  • @claraalmeida1775
    @claraalmeida1775 11 months ago

    Could you put 2 images, 1 for the character and 1 for the background, then blend them to make the final image?

  • @whodat861
    @whodat861 11 months ago

    @OlivioSarikas After looking through all the nodes, I was not able to find LatentBlend. I do see that you have it in your example workflow. Would it be under Post Processing?

  • @luoianmon5802
    @luoianmon5802 9 months ago

    Hi, thanks a lot for your tutorial, but when I run this lesson's workflow in the cloud, it shows the error below:
    Prompt outputs failed validation
    LoadImage:
    - Custom validation failed for node: image - Invalid image file: 04 (3).jpg
    How do I fix this?

  • @petpo-ev1yd
    @petpo-ev1yd 9 months ago

    Hi, I'm trying to inpaint one area of a photo, but I don't want any changes in other areas. How should I do that?

  • @nuejidub
    @nuejidub 11 months ago

    The latent image method is interesting, but doesn't it lack control compared to OpenPose or Canny?

  • @JM-yn2lw
    @JM-yn2lw 7 months ago

    I know this is probably the most basic of questions, but what does the () do in a prompt? Does it assign more weight to the enclosed word?

    • @AyuK-jm1qo
      @AyuK-jm1qo 5 months ago

      @JM-yn2lw It makes the word more important and pushes the model to include it (see the example below).
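
A short illustration of the weighting syntax that reply describes, as parsed by ComfyUI's CLIP Text Encode node; the prompts and the exact weight values are arbitrary examples, not taken from the lesson.

```python
# Illustrative prompt strings only (not from the video). Parentheses act as
# emphasis: a bare (word) multiplies its weight by roughly 1.1, and
# (word:1.3) sets the weight explicitly; stacking parentheses compounds it.
prompt_plain    = "portrait of a woman, blue eyes, freckles"
prompt_emphasis = "portrait of a woman, (blue eyes), freckles"      # ~1.1x weight on "blue eyes"
prompt_explicit = "portrait of a woman, (blue eyes:1.3), freckles"  # explicit 1.3x weight
prompt_stacked  = "portrait of a woman, ((blue eyes)), freckles"    # roughly 1.1 * 1.1
```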

  • @keylanoslokj1806
    @keylanoslokj1806 11 months ago

    Do you run it in the cloud or on your own PC?

  • @SumNumber
    @SumNumber 9 months ago +1

    Cool. I found that if you drop your output image back into the image loader and render again while adjusting the denoise, you can get some really great results, and repeating that process over and over gives some very interesting variations (a rough sketch of that loop follows below). :O)
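
The iterative re-rendering described in that comment can also be expressed outside the ComfyUI graph; below is a rough sketch using the Hugging Face diffusers img2img pipeline, where the model name, prompt, file names, and strength schedule are all placeholder assumptions rather than anything from the video.

```python
# Rough sketch (not the ComfyUI workflow): feed each output back in as the
# next input, lowering the denoise ("strength" in diffusers) so later passes
# refine rather than replace. Model, prompt, and file names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input_sketch.png").convert("RGB").resize((512, 768))

for i, strength in enumerate([0.7, 0.55, 0.4]):
    image = pipe(
        prompt="detailed painting of a knight in a forest",
        image=image,
        strength=strength,
        guidance_scale=7.5,
        num_inference_steps=30,
    ).images[0]
    image.save(f"pass_{i}.png")
```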

  • @Inugamiz
    @Inugamiz 11 months ago

    Maybe with this "course" I can give ComfyUI another run.

  • @AMdvij
    @AMdvij 11 months ago +5

    Stop generating people and portraits, do something more complicated)

    • @aljosacebokli
      @aljosacebokli 11 months ago

      Agreed, but also thanks so much Olivio, these have been really amazing as a first dip into Stable Diffusion for me.

    • @tantarantaran
      @tantarantaran 10 months ago +2

      "often the models are trained in a way that it's always doing the same thing, everything is centered" then goes ahead and generates a centered image of another pretty girl.

    • @RompinDonkey-bv8qe
      @RompinDonkey-bv8qe 10 months ago +1

      People complaining about free content. Hey, go pay for it, then you can tell the guy what to do, eh?

  • @sickvr7680
    @sickvr7680 1 year ago

    Thank you, Olivioooooooo

  • @구원자님
    @구원자님 8 months ago

    I got an NSFW result