New SDXL ControlNet Models are Here!🤯 (Mind Blowing Results)

  • Published: 7 Sep 2024
  • 📢 Last Chance: 40% Off "Ultimate Guide to AI Digital Model on Stable Diffusion ComfyUI (for Beginners)" use code: AICONOMIST40
    🎓 Start Learning Now: rebrand.ly/AI-...
    --------
    🌟 All Resources & Workflow for Free: rebrand.ly/New...
    Learn how to use the latest and greatest Stable Diffusion XL ControlNet models in ComfyUI! This step-by-step tutorial covers everything from downloading and installing the new SDXL ControlNet models to generating incredible AI images with Canny, Depth, and OpenPose. We'll explore how to use these powerful tools to control your AI's output, fix anatomy, upscale images, and even change clothes on a model, all within ComfyUI. Whether you're a beginner or an experienced AI artist, this video will show you the power of the best new SDXL ControlNet models for next-level image generation.
    𝕏 Follow me : X.com/Aiconomist1
    💼 For Business: aiconomist@pixelailabs.com
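    The video drives these Xinsir SDXL ControlNet models from ComfyUI, but as a rough illustration the same checkpoints can also be loaded from Python via the diffusers library. This is a hedged sketch, not the video's workflow; the model IDs are my assumption based on Hugging Face hub naming, and actually running the pipeline needs `torch`, `diffusers`, and a GPU with enough VRAM.

    ```python
    def build_canny_pipeline():
        """Build an SDXL + Canny ControlNet pipeline (hedged sketch).

        Returns None when diffusers/torch are not installed, so the
        function stays importable for illustration purposes.
        """
        try:
            import torch
            from diffusers import (ControlNetModel,
                                   StableDiffusionXLControlNetPipeline)
        except ImportError:
            return None
        # Assumed hub IDs for the Xinsir Canny model and the SDXL base.
        controlnet = ControlNetModel.from_pretrained(
            "xinsir/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
        pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            controlnet=controlnet, torch_dtype=torch.float16)
        return pipe.to("cuda")
    ```

    In ComfyUI the equivalent is simply a Load ControlNet Model node feeding an Apply ControlNet node, as shown in the video.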

Comments • 50

  • @Aiconomist
    @Aiconomist  15 days ago

    📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course

  • @timothywells8589
    @timothywells8589 2 months ago +14

    Sorry to hear about your health problems, wishing you good luck and a speedy recovery! Can't wait for the course; I think this is exactly what I've been looking for for over a year now, everything in one place rather than a mishmash of different YouTube tutorials. These ControlNets look great! Only problem is I was prepping images over the weekend to get ready to make my first LoRA, and now I think I'll have to restart from scratch to see if these new models and your excellent guidance can improve the results. Thanks so much 😊

  • @madarauchiha5433
    @madarauchiha5433 15 days ago

    Hey man, awesome stuff. Would love a ComfyUI workflow, even paid, especially for products, as in using your own product shots and composing them into different environments.

  • @baheth3elmy16
    @baheth3elmy16 2 months ago +1

    I'm very sorry to hear about your health situation. I hope you will get better soon. Great video by the way, the presentation was perfect, and the music was so soothing.

  • @maxdeniel
    @maxdeniel 18 days ago

    Excellent video my friend, I had NO problems downloading and running the Depth Anything node as well as the Canny Edge node; both run great.
    BUT when I tried to run the DWPose Estimator, an error pops up:
    Error occurred when executing DWPreprocessor:
    'NoneType' object has no attribute 'get_providers'
    And I actually got the same error when trying to run the DWPreprocessor
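    For what it's worth, that `get_providers` error usually means the DWPose ONNX inference session was never created, most often because the `onnxruntime` package is missing or the detector's .onnx files failed to download. A hedged first diagnostic step (my suggestion, not from the video):

    ```python
    def check_onnxruntime():
        """Report ONNX Runtime's available execution providers, or
        explain why a DWPose session could not be created at all."""
        try:
            import onnxruntime as ort
        except ImportError:
            # No runtime means every InferenceSession ends up None,
            # which produces the 'get_providers' AttributeError.
            return "onnxruntime is not installed; try: pip install onnxruntime"
        return ort.get_available_providers()

    print(check_onnxruntime())
    ```

    If the providers list prints fine, the next thing to check is that the DWPose .onnx model files actually exist where the node expects them.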

  • @ShubzGhuman
    @ShubzGhuman 1 month ago +2

    My Depth Anything doesn't work. Can you make a video on how to install it and where to place the required files?

  • @lefourbe5596
    @lefourbe5596 1 month ago +1

    aaaaaaaand NOW there is the Xinsir UNION model that does everything XD!
    This was relevant for 5 days! Great video btw

  • @SanjeevPenupala-7T
    @SanjeevPenupala-7T 2 months ago

    Thanks for making this video! It's really cool to see how fast this technology is growing. I'll be sure to try out these new ControlNets in my future workflows! And I hope you get better soon! You are the only YouTuber I found that gives in-depth explanations for workflows and provides the workflows to play around with, so thank you for that!

    • @Aiconomist
      @Aiconomist  2 months ago

      Thank you! It's great to know I could help.

  • @ceegeevibes1335
    @ceegeevibes1335 2 months ago

    I get the best results by loading these models as diffusion models (Diff ControlNet Loader). This loader has to be connected to a Differential Diffusion node that is connected to your checkpoint model output (I figure this is how they are able to work TOGETHER with the checkpoint properly). I'm not 100% sure technically, but I'm getting extremely good results.
    Even if I'm wrong, I would encourage y'all to test this variation of the workflow and judge for yourselves. Have a great Comfy DAY!

  • @oncelife7499
    @oncelife7499 2 months ago +2

    thank you
    I'm going to watch a good video today and learn from it.
    Thank you always~

  • @DaKussh
    @DaKussh 2 months ago +1

    What have I been using then? I swear I've been using ControlNet poses with SDXL for the past year already

  • @franlp32
    @franlp32 2 months ago

    You can also stack multiple controlnets to get even better results.
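    In ComfyUI, stacking is just chaining Apply ControlNet nodes; conceptually, each ControlNet contributes a residual to the sampling pass, scaled by its strength. This toy sketch is my simplification of that combination, not ComfyUI's actual code:

    ```python
    def combine_residuals(res_a, res_b, strength_a=0.7, strength_b=0.5):
        """Toy model of stacked ControlNets: each network's residual is
        scaled by its strength, and the scaled contributions are summed."""
        return [a * strength_a + b * strength_b for a, b in zip(res_a, res_b)]

    # e.g. a Canny residual plus an OpenPose residual at full strength:
    print(combine_residuals([1.0, 2.0], [0.5, 0.0], 1.0, 1.0))  # [1.5, 2.0]
    ```

    This is also why strengths usually need to be lowered when stacking: the contributions add up, and two ControlNets at full strength can over-constrain the image.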

  • @ysy69
    @ysy69 2 months ago

    My healing prayers to you! Thanks for making and sharing this. I wonder how these compare with MistoLine's SDXL ControlNet.

  • @TheOfficialGusS
    @TheOfficialGusS 2 months ago

    Thanks a lot for those amazing videos, wishing you good luck and a speedy recovery!

    • @Aiconomist
      @Aiconomist  2 months ago

      Thank you very much!

  • @sudabadri7051
    @sudabadri7051 2 months ago

    Great work as always even with your health problems you are delivering ❤

  • @aouyiu
    @aouyiu 2 months ago

    12:00 how did changing the seed value by 1 tell it to generate 4 images instead of 1? 🤔

  • @xteasy1045
    @xteasy1045 1 month ago

    Why do I get this error with the exact same connections as you?
    Error occurred when executing KSampler:
    'NoneType' object has no attribute 'shape'
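    That `'shape'` error typically means a tensor the KSampler expected is None, which in practice often traces back to a model file that failed to load (missing, or a partial download). A hedged first sanity check (my suggestion; the path below is hypothetical):

    ```python
    import os

    def model_file_ok(path):
        """Return True if a checkpoint file exists and is non-empty.
        A zero-byte or missing file is a common cause of loaders
        silently returning None, which later surfaces as
        "'NoneType' object has no attribute 'shape'" in the KSampler."""
        return os.path.isfile(path) and os.path.getsize(path) > 0

    # Hypothetical example path inside a ComfyUI install:
    print(model_file_ok("ComfyUI/models/controlnet/controlnet-canny-sdxl-1.0.safetensors"))
    ```

    Re-downloading the ControlNet file and restarting ComfyUI resolves this class of error surprisingly often.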

  • @ghilesbardi
    @ghilesbardi 1 month ago

    can we use it on A1111 ?

  • @noonesbiznass5389
    @noonesbiznass5389 2 months ago

    Love your vids, thanks for the helpful insights.

  • @RahulGupta-ub1op
    @RahulGupta-ub1op 1 month ago

    Is it possible to change a person's pose in an image while keeping their original body structure intact?

  • @charles2353
    @charles2353 2 months ago

    God fucking damnit, I was waiting for this for SO LONG, and today, my ComfyUI install broke during a system crash... Now I have to re-install EVERYTHING just to get ControlNet... ARghhhh.....

  • @nacho8049
    @nacho8049 2 months ago

    Amazing video as usual. Thanks a lot!

  • @gaming_one1846
    @gaming_one1846 2 months ago +2

    can these models be used in 1111 ?

  • @stefanvozd
    @stefanvozd 2 months ago

    great video, and hope you get better soon!

  • @killbadmashia9225
    @killbadmashia9225 2 months ago +1

    how do you apply the depth model to an existing image ?

    • @ceegeevibes1335
      @ceegeevibes1335 2 months ago

      OP is showing this in the video timestamp: 4:15

    • @killbadmashia9225
      @killbadmashia9225 2 months ago

      @@ceegeevibes1335 I meant applying depth from one image to another image, i.e. a different person. I know you can swap faces using the ReActor node, but that doesn't yield the best results for me.

  • @rodrigop.9071
    @rodrigop.9071 2 months ago

    Do you know if it is possible to use the models in Forge?

  • @quotesspace1713
    @quotesspace1713 2 months ago

    nice tutorial as always thanks 🙏🙏
    What's the plugin you used for the "Generate on Cloud GPU"?

    • @Aiconomist
      @Aiconomist  2 months ago +1

      It's Comfy Cloud by nathannlu

  • @neelicious
    @neelicious 2 months ago

    Sending prayers and love. 👩🏾‍⚕️🤒🩹 😊

  • @salomahal7287
    @salomahal7287 2 months ago

    Hey, thanks for the update. Got a question though: there are two Canny models available, the normal one and the V2 one. Do you know the difference? They both have the same file size...

    • @Aiconomist
      @Aiconomist  2 months ago +1

      I haven't tested the V2 model yet, but I don't think there will be a big difference between the two.

    • @ceegeevibes1335
      @ceegeevibes1335 2 months ago

      @@Aiconomist v2 looks much better

  • @Rubberglass
    @Rubberglass 2 months ago

    Yes! 😮

  • @putrareverie
    @putrareverie 2 months ago

    I want to dive deeper into ComfyUI. Is a 6GB VRAM RTX 3050 laptop GPU enough to run ComfyUI? I see there is a "generate on cloud" button. Is it worth it? Will it be charged every time we generate an image, or pay by the hour? Thanks, hope you get well soon.

  • @danielodeniyi8729
    @danielodeniyi8729 2 months ago

    What about a Lineart model

    • @Aiconomist
      @Aiconomist  2 months ago

      I think Xinsir will train this model; we should just give it some time.

  • @henroc481
    @henroc481 2 months ago +7

    These aren’t new

    • @reaperhammer
      @reaperhammer 2 months ago +2

      The HF page says this is the SOTA version of open-source OpenPose models... so while this isn't totally new like the video makes it sound, it's just some better-trained models or something. Clickbait strikes again.

  • @Vanced2Dua
    @Vanced2Dua 2 months ago

    Please, SD 1.5

    • @Aiconomist
      @Aiconomist  2 months ago +1

      I personally think SD 1.5 already has well-trained ControlNet models.

  • @kevin.feng1
    @kevin.feng1 2 months ago

    Hey brother, I sent you an email. Great videos :)

  • @OmniEngine
    @OmniEngine 2 months ago

    My YouTube channel thanks you.