Stable Diffusion - Face + Pose + Clothing - NO training required!

  • Published: 13 Oct 2023
  • Building on my Reposer workflow, Reposer Plus for Stable Diffusion now has a supporting image, allowing you to incorporate items from that image into your AI generations! Find a nice jacket, find a pose, pick a face and in just seconds your character has both a BODY and an OUTFIT!
    No training, no roop, no Visual Studio bloatware - just rock with the images you’ve got!
    Note: This video shows the original IP Adapter, as does the workflow. Newer IP Adapter nodes are different, so see the other Reposer workflow for an example of using the newer nodes. This one will remain unchanged so you get the best of both worlds. (For one way to queue the workflow programmatically, see the API sketch after the links below.)
    Available for FREE from the AVeryComfyNerd web page -
    github.com/nerdyrodent/AVeryC...
    Reposer Installation Guide -
    • Reposer = Consistent S...
    How to install ComfyUI:
    • How to Install ComfyUI...
    == More Stable Diffusion Stuff! ==
    * ComfyUI Zero to Hero! -
    • ComfyUI Tutorials and ...
    * ControlNet Extension - github.com/Mikubill/sd-webui-...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Dreambooth Playlist - • Stable Diffusion Dream...
    * Textual Inversion Playlist - • Stable Diffusion Textu...
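    As a rough illustration only (not from the video): once a workflow like this is exported from ComfyUI in API format ("Save (API Format)" with dev mode enabled), it can be queued over ComfyUI's local HTTP endpoint. The file name below is hypothetical.
      # Sketch only: queue a Reposer-style workflow through ComfyUI's local HTTP API.
      # Assumes ComfyUI is running on its default port and the workflow was exported
      # in API format; "reposer_plus_api.json" is a hypothetical file name.
      import json
      import urllib.request

      COMFY_URL = "http://127.0.0.1:8188/prompt"     # default ComfyUI endpoint
      WORKFLOW_PATH = "reposer_plus_api.json"        # hypothetical exported workflow

      with open(WORKFLOW_PATH, "r", encoding="utf-8") as f:
          workflow = json.load(f)

      # ComfyUI expects the node graph under the "prompt" key and returns a prompt_id.
      payload = json.dumps({"prompt": workflow}).encode("utf-8")
      req = urllib.request.Request(COMFY_URL, data=payload,
                                   headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          print(resp.read().decode("utf-8"))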

Comments • 347

  • @Saimsboy • 7 months ago • +30

    You are amazing.
    You are achieving what many of us are trying to do; "Consistency in character creation."
    Thank you for sharing your progress with us.

    • @benjamininkorea7016 • 7 months ago • +6

      Yep, this is the Holy Grail of AI design this year-- consistency.

    • @NerdyRodent • 7 months ago • +6

      You are so welcome!

  • @jibcot8541 • 7 months ago • +8

    This is freaking amazing, what a time to be alive!

  • @tomwojcik • 7 months ago • +2

    This is beyond crazy. I feel like all these tools have just been created recently and they are already THIS powerful. Just crazy.
    Subscribed. Great content.

    • @NerdyRodent • 7 months ago • +1

      Should see more fun in the future too!

  • @raphaellfms • 7 months ago • +1

    Top tier content! And I didn’t even reach the end of the video! Please keep up the good job!

  • @juanjesusligero391 • 7 months ago • +1

    This is too powerful! You always surprise me with your amazing ideas. Thank you so much for making and sharing these tutorials! :D

  • @Atrasees • 7 months ago • +2

    Backgrounds are the next logical step, yeah? Thanks for the awesome workflow!

  • @jacekfr3252 • 7 months ago • +5

    omg, this is so extensive and so well made, thank you for sharing this

  • @felipealmeida5880 • 7 months ago • +1

    Unbelievable, I've never seen anything like this. You don't even need a supporting image, and if you use face restoration with ReActor it's just perfect. Thank you very much.

  • @blackvx • 7 months ago

    Powerful stuff! I liked the variations of the dragon t-shirt. Thanks!

  • @classacre • 7 months ago • +4

    Damn, this is absolutely phenomenal for storytelling. I've been searching for a workflow / method to get consistent characters in consistent clothing in a pose and this is just perfect. The only thing that would make this better would be the ability to add multiple characters in the same image, each character having their own consistent clothing. This would be revolutionary for using AI image generation for storytelling.

    • @tstone9151 • 5 months ago

      I'm trying to figure that out as well, my current workflow for this is messy but gets the job done. In short, you have to do a lot of messy compositing, and do a final pass using img2img.

  • @pincludestudio5562 • 7 months ago • +4

    You're an absolute magician. Thank you for your effort sir.

  • @Powerlevelover9000 • 7 months ago • +4

    Thanks a lot, your videos are always so helpful.

    • @NerdyRodent • 7 months ago • +1

      Glad to hear that! Thanks for watching 😃

  • @c0nsumption • 7 months ago • +1

    Dude, this is Fn incredible. Will be diving in after work!!!!

    • @NerdyRodent • 7 months ago • +1

      Glad you like it! 🤓

  • @johnmcaleer6917 • 7 months ago

    IP Adapter changed everything for me, and you have made IP Adapter even more useful... Thanks so much!

  • @DoorknobHead • 7 months ago • +3

    The New Stable Nerdy+ Diffusion. Genius.

  • @hleet • 7 months ago • +3

    WOW! I watched the Ai-trepreneur video about LoRA clothing before, and that was absolutely complicated! ... Thanks a lot for this information, I will try it :)

    • @NerdyRodent • 7 months ago • +1

      Check my twitter for other examples. Food makes a great jacket too 😉

  • @tahookideveloper3338 • 7 months ago

    very very amazing video. love it.
    I will try to create a piece by following your video.
    Thank you !

  • @noobplayer-jc9hy • 7 months ago • +3

    ❤❤❤ You are simply great, you deserve a lot of subs

    • @NerdyRodent • 7 months ago

      That’ll never happen, but thanks! 😆

  • @jacekfr3252 • 7 months ago • +2

    Yeaaaa! Thanx! Was waiting for this!

  • @art3112 • 7 months ago • +4

    Excellent work!

  • @alexgilseg • 4 months ago

    This is nothing but amazing! I'm gonna buy you a few cups of coffee, that's for sure! I've been waiting for this since forever... Will there be an XL version of clothes, face and pose?

  • @stephantual • 3 months ago

    Looks nice and clean.

  • @user-pb2zx3mc7p • 6 months ago • +3

    This is amazing! Any chance you have an updated version for SDXL?

  • @stevennie6010 • 5 months ago

    WOW THIS IS AMAZING!

  • @jimdelsol1941 • 7 months ago • +1

    Thank you very much again !

  • @alexmehler6765 • 5 months ago • +2

    This is amazing, man! The only thing missing is modifying the facial expression.

  • @darkart4fun • 7 months ago

    This is SOOO GOOOD!

  • @blacksage81 • 6 months ago • +1

    So I finally dug my PC out of storage and got the SDXL Reposer workflow, and after some searching I realized that there isn't a video covering that wonderful workflow. Better still, I got it working on my 12 GB GPU, which was a surprise, as with XL workflows things can easily get out of hand VRAM-wise. I am shocked by that workflow.

  • @SaadKhanAhmed • 5 months ago

    This is awesome! Thanks for the amazing workflow.
    I have one question: I am trying to have the image generated in a comic style. Since we are using SD 1.5 and can't just rely on a prompt like "comic book style", what are my options while staying on SD 1.5? I tried introducing a LoRA as well as adding a positive prompt with models like DreamShaper etc., but it seems like the prompt does not have a lot of weight, or simply isn't going to work. Any idea?

  • @graysongreen8011 • 7 months ago • +3

    Hey, love the hard work you put in for this! I'm getting a LONG KSampler error, wondering if you could help?
    Error occurred when executing KSampler:
    'NoneType' object has no attribute 'shape'
    ComfyUI is up to date, and I don't have the Fooocus KSampler installed... any thoughts?

  • @EmmaFitzgerald-dp4re • 6 months ago

    Awesome! Thanks for this. Quick question: how do you increase the batch number?

  • @Kelticfury • 7 months ago • +1

    You are a bloody legend

  • @r.m3751 • 7 months ago • +2

    Amazing!! But can we apply this if we have different characters with different poses in one image?

  • @nikgrid • 7 months ago • +1

    Nerdy, I bypassed the DWPose preprocessor and used an OpenPose skeleton for the pose reference, and it worked beautifully and wasn't influenced, because the skeleton has no clothes.

  • @CosmicLloyd • 7 months ago

    So AWESOME! Thank you so much! This combined with a way to have a consistent background so that multiple characters can be in the same scene would be basically all we need to create comic books and maybe even A.I. movies! Is there a way to have a consistent background for multiple characters?

    • @NerdyRodent • 7 months ago • +1

      For that I’d probably use cut-out characters tbh, but maybe!

  • @orionrobinette2691 • 3 months ago

    So I got this working, with NO errors! And it does very well looking at the pose and face but it seems to have a hard time with outfit. It will slightly take hints from the outfit image but it will fully change color, add bits, change how it fits, sometimes just change the outfit fully.
    Do I need a model? change a weight? How can I tweak this so that it listens to the clothing image a bit more? I have tried using the input also to help but I am not getting much success with outfits. Any tips?
    Thank you for the video!

  • @KriGeta • 6 months ago

    Could you create a tutorial specifically for anime characters? Like using the arm of one character, the face of another and the rest of the body from a third character, then posing it in hard poses, like foreshortening?

  • @Some1uNo • 7 months ago • +2

    Pure Gold

  • @user-jo7bi5oh4n • 6 months ago • +2

    Thank you for sharing your work! I am getting an "Error occurred when executing ArithmeticBlend: The size of tensor a (3) must match the size of tensor b (6) at non-singleton dimension 0". The exact same error was posted in github discussions recently, which makes me think that a recent update to one of custom nodes broke something, maybe?

  • @360VIDEOVIBES • 7 months ago • +1

    Thanks for sharing this amazing workflow.
    Where is it better to add the ReActor node for face swap, as I cannot get the exact same face for realistic images?

    • @NerdyRodent • 7 months ago • +1

      You can add it in before or after the OpenPose ControlNet. Setting the face weight to 1 or more will make the generation more like the face image.

  • @autonomousreviews2521 • 7 months ago

    Fantastic :) What a ride!

  • @theawesome2902 • 7 months ago

    I'm getting an error and can't see the final image. The error is 'NoneType' object has no attribute 'shape'.

  • @gagrevolver • 5 months ago

    Hey Rodent, just dropped you a dm on your patreon earlier today. Looking forward to your assistance getting this workflow running smoothly!

  • @r.m3751 • 7 months ago • +1

    Can we apply this if we have different characters with different poses in one image? please can anyone tell me

  • @T3MEDIACOM • 7 months ago • +1

    I would like to have your instant lora.... take the completed image and place it into this workflow. Is there a way to combine them?

  • @haljordan1575 • 6 months ago • +2

    WHAT ABOUT PERSPECTIVE? It'd be nice if it could mimic the camera angle as well?

  • @pressrender_ • 2 months ago

    Hey Nerdy, thanks for helping us to learn more. I'm using this workflow a lot, but unfortunately, I tried to use it this week and have this error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). What could it be? Thanks a lot.

  • @JeanKentHome • 6 months ago

    The images I'm generating all have slightly scrunched/shorter legs than they should. It might be because my clothing reference just focuses on the top of the body? I don't want to change my image input--is there a clever alteration that can just make the legs look more normal?

  • @tianz4710 • 7 months ago • +1

    Great job. Could you explore using IPAdapter to stylize images? Like turning a real photo into Rick and Morty style?

  • @DanielThiele • 1 month ago

    Dear Mr. Rodent. I looked into your workflows for the new reposer plus workflow, but I only see poser, and poser 2 with updates from last week. The only other one is reposer plus with the bypass image option, and that is still from 4 months ago.
    I am replacing the IP Adapter Apply nodes with IPAdapter Advanced nodes as I am writing, assuming this will do the trick.
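    As a rough, hypothetical sketch (assuming the workflow is exported in API format and that "IPAdapterApply" is one of the legacy class names being replaced), a few lines of Python can at least list which nodes in the JSON still use the old IP Adapter nodes, so you know what to swap for the newer IPAdapter Advanced node in the UI:
      # Sketch only: list nodes in an API-format workflow whose class_type matches
      # assumed legacy IP Adapter class names; the file name is hypothetical.
      import json

      LEGACY_CLASSES = {"IPAdapterApply", "IPAdapterApplyFaceID"}  # assumed legacy names

      with open("reposer_plus_api.json", "r", encoding="utf-8") as f:
          workflow = json.load(f)

      for node_id, node in workflow.items():
          if node.get("class_type") in LEGACY_CLASSES:
              print(f"node {node_id}: {node['class_type']} -> replace with IPAdapter Advanced")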

  • @birn • 7 months ago

    Thanks. Your video brought me back to comfy. One question, I've noticed Preview Image Final will have bits and pieces of the Supporting Image I can't get rid of despite the thresholds I set. Is there a way to use the MaskEditor to remove those bits before it gets sent to the final Reposer image?

  • @hfoxhaxfox1841 • 7 months ago • +1

    My output looks a bit different from the face image, there is only a bit of similarity in the eyes, and that's it

  • @Moedow • 4 months ago

    You sir got my sub!

  • @raphaellfms • 7 months ago • +2

    Can you please make a tutorial on how to apply a style to an image? Something like: grab a photo portrait and make it a 3d cartoon or anime style?

    • @NerdyRodent • 7 months ago • +2

      You could do it using this 😉

  • @happyme7055 • 7 months ago • +3

    Just *WOW* :-) But... I am using A1111 - and tried to set up a ComfyUI installation like this. Hell, I missed :/ Any how-to out there?

    • @NerdyRodent • 7 months ago • +3

      This would all be a lot of manual steps in automatic 1111

  • @roman-tdv • 7 months ago

    It would be cool to see someone creating a demo of it to use it on Hugging Face.

  • @jadosfrombarbados • 4 months ago

    Hi!
    I have a question/challenge. I have been trying to recreate this ability, but purely for a face. So, I have used controlnet for generating the faces in the same pose (being a head and shoulders, facing forward pose), but then, I want the ability to add glasses, hats, earrings, etc., and have them be the SAME every time. I would then want to extend this to hair, face shapes etc., so that I could have 10 different faces, all wearing the same pair of glasses, or have the same face, showcasing 10 different types of glasses. Is this possible? Believe me, I have been trying...
    Thanks!

  • @haggler40 • 6 months ago

    Any idea about this error? Something is stuck at ImageCompositeMasked: "Error occurred when executing ImageCompositeMasked: tuple index out of range"

  • @lilillllii246 • 6 months ago

    How can I make it look exactly the same when the clothes are slightly different?

  • @hunjo8463 • 6 months ago

    Thank you so much. My English is limited, but I really want to say "thank u!!!!!!!!!!!!!!"

  • @Paulo-ut1li • 7 months ago • +4

    That's genius! For ip-adapter plus face, fp32 gave me better results. Is there a way to add two ip-adapters, one for the front and another for the style, like a painting style or comic? It would be awesome. Another question: is it better to use transparent png regarding faces? Thank you!

    • @NerdyRodent • 7 months ago • +1

      Yup, you can keep chaining IPAdapter like in this one

    • @KINGLIFERISM • 7 months ago

      Not sure where I should install these?
      The ControlNets I think are in the right spot, just wanted to confirm.

    • @NerdyRodent • 7 months ago

      @@KINGLIFERISM you can check the installation video for complete and detailed installation instructions

  • @Spex84 • 7 months ago

    Awesome!
    I'm gonna be that guy: is it at all possible to run IPAdapter on 6 GB of VRAM?
    I gave your previous Reposer workflow a whirl recently and, after resolving some missing nodes and Torch/pip upgrade errors, immediately ran out of VRAM. Darn it.

    • @NerdyRodent • 7 months ago

      Could be pushing it a bit, but maybe with all your low vram settings and such!

  • @jasonstetsonofficial • 7 months ago • +1

    Exactly!!!

  • @saleem801 • 6 months ago

    I get import errors for two required custom node packs: ComfyUI-Allor and comfyui-art-venture

  • @kpr2 • 6 months ago

    This is really nifty, Rodent, and I certainly appreciate it, but I'm getting very inconsistent results from the poses. I've tried all sorts of reference images but four times out of five it ignores the reference pics and just does what it wants to. Not sure what I might do to improve the output off the top of my head (though I am going to fiddle with values & see where that gets me), so I thought I'd come see if you might have any suggestions. Again, it *does* pose the character correctly on occasion, but only once in awhile.

    • @NerdyRodent • 6 months ago

      A couple of ways to ignore the pose would be to use a non-human pose image, which I’ve done to create some interesting creatures, or simply lower the pose controller strength. The reverse is, of course to use clear human images and a pose controller strength of one.

    • @kpr2 • 6 months ago

      Thanks :) I'm not trying to ignore the pose, I'm trying to get it to conform to it (and I am using a humanoid character & pose references). I'll see if I can set it up with better reference images & see where it gets me. Much appreciated! @@NerdyRodent

  • @OmarZambranoplus • 7 months ago

    Marvelous

  • @rogerh6702 • 7 months ago • +1

    Can you please list which custom nodes you use. When I try the work flow I always have missing nodes. Thanks!

    • @NerdyRodent • 7 months ago

      I use ComfyUI manager, making a single click to install all missing nodes! I’ve also added the full list of used and unused custom nodes.

  • @SolveForX • 6 months ago

    How would we engine camera angles?

  • @DJBFilmz • 7 months ago • +1

    Is this possible in A1111 or Fooocus? Or should I just bite the bullet and learn ComfyUI?

    • @hfoxhaxfox1841 • 7 months ago • +2

      You don't need to learn ComfyUI to use it, you just need to import the nodes following the tutorial and voila.

  • @Ahmed211983 • 7 months ago • +1

    wow! Cool!

  • @brentperry6974 • 7 months ago • +1

    So many red nodes listed as undefined, and "install missing" says everything is loaded. Unfortunately I just started playing with ComfyUI today, but I have used Auto1111 for months.

    • @NerdyRodent • 7 months ago

      Use ComfyUI Manager to install missing custom nodes.
      Be sure to keep ComfyUI updated regularly - including all custom nodes.

  • @jurandfantom • 7 months ago

    Just starting with ComfyUI - does anybody have a suggestion for how to clean up such a node situation? I managed to find one custom node that connects the selected ones into one, but I was thinking about wire paths as well - I noticed that some people have a setup that uses only 90° bends and straight lines.

    • @NerdyRodent • 7 months ago

      It's entirely up to your own personal settings whether you want to show the wires and how they bend.

  • @vtchiew5937 • 7 months ago • +1

    thanks for the reposer plus workflow, I had trouble getting it working as I'm stuck at the Segment Anything nodes being all red (checked from the manager that it has been installed, tried reinstalling but to no avail), is there something that I am missing?

    • @NerdyRodent • 7 months ago

      Use ComfyUI Manager to install missing custom nodes.
      Be sure to keep ComfyUI updated regularly - including all custom nodes.

    • @vtchiew5937 • 7 months ago

      @@NerdyRodent I actually did a clean installation, and installed all custom nodes indicated by ComfyUI manager, and it's the Segment Anything nodes that are red, while the rest are okay, so I was wondering if it was due to a version conflict (which I need to manually install a particular version).

    • @NerdyRodent • 7 months ago

      @@vtchiew5937 you can install it via the normal install in manager if somehow install missing fails. I’ve added a full list of both used and unused custom nodes.

  • @towakona • 6 months ago

    Where do I press to generate the image?

  • @Daniel-D-Teach • 13 days ago

    Thank you so much for the cool content. Did you perhaps update this workflow to work with the new IPAdapter? The v2 nodes aren't backwards compatible.

    • @NerdyRodent • 13 days ago • +1

      There are a bunch of updates available via Patreon 😉

    • @Daniel-D-Teach • 13 days ago

      @@NerdyRodent thanks! I'll make sure to check it out. thank you again for your great work!

  • @harshitpruthi4022 • 7 months ago

    Error occurred when executing IPAdapter:
    'ClipVisionModel' object has no attribute 'processor'
    Can someone please help me with this error?

  • @rukaiko • 7 months ago

    Am I the only one getting this error? "Currently DWPose doesn't support CUDA out-of-the-box". It gives me a grey image T.T

  • @seans4018 • 7 months ago

    This is a very exciting pipeline! However, I constantly see very basic poses in many AI images. Is it possible to do more dynamic posing?

    • @JustMaier • 7 months ago • +1

      Probably with the help of controlnet

    • @NerdyRodent • 7 months ago

      Yup. Add any custom nodes you like and let me know what you create! ;)

  • @pack9694 • 4 months ago

    Is there any way to change facial expression like angry, yelling, etc.?

  • @MrNorBro • 7 months ago

    This looks great! I wanted to try it out, but I encountered a problem with the 'segment anything' module. I attempted to install it using the manager, but even after installation it still gave me errors; for some reason ComfyUI doesn't recognise it! I tried to bypass it by removing nodes, but then the output looked bad ( : ... I'm relatively new to ComfyUI... Could you please make one workflow without the 'segment anything' module?

    • @NerdyRodent • 7 months ago

      The original Reposer doesn’t have segment anything, so you can use that 😉

    • @MrNorBro • 7 months ago

      @@NerdyRodent Sure, and that works, good job by the way! But that workflow doesn't have the supporting image window for the outfit ( :

  • @mrschneebly85 • 4 months ago

    It would help if you told us which ComfyUI workflow from your GitHub you use in the video. I am massively confused :D I can't find the correct workflow. They all look different in your video.

    • @NerdyRodent • 4 months ago • +1

      Feel free to drop me a dm on patreon if you need more help! 😀

  • @HeinleinShinobu • 7 months ago

    Is this only available in ComfyUI?

  • @akirathompson5914 • 2 days ago

    Is it OK to use the OpenPose character rig thing (multi-colored bone structure poseable rig over a black background) as the input for the pose here? Or does it have to be a photo of a person?

  • @baptiste6436 • 7 months ago

    Ok this is crazy, I think that's all we need for complete designing of graphic novels with consistent characters. I need to figure out how I can use the API to achieve that

    • @NerdyRodent • 7 months ago

      :D

    • @KINGLIFERISM • 7 months ago

      There is a comic book generator on hugging face already you know.

    • @baptiste6436 • 7 months ago

      @@KINGLIFERISM Right, but there is no consistency or character selection (yet).

  • @crow-mag4827 • 6 months ago

    I've been working for hours and keep getting nothing but errors... ArithmeticBlend errors and then IP Adapter 'proj.weight' errors. I'm running a 4080 so I have plenty of power. Not sure what's up.

    • @NerdyRodent • 6 months ago

      I'd start by working through each of the troubleshooting steps. Most likely it just needs to be updated.

  • @OneHandStudio • 7 months ago • +1

    I cannot fix the "Error occurred when executing DWPreprocessor" error even after updating everything. Has anyone like me found a solution?

    • @NerdyRodent • 7 months ago

      Click “update all” in the manager

  • @simonhick9124 • 4 months ago

    Hello, I've been having trouble. Do you know where the NNLatentUpscale node is?

    • @NerdyRodent • 4 months ago

      The easiest way is to use ComfyUI manager. You can also dm me on patreon if you need more help!

  • @simonhick9124 • 4 months ago • +1

    SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)

  • @damiangabrys1648 • 6 months ago

    Amazing. I was trying to get it running on my end, but I keep getting "CUDA_PATH is set but CUDA wasn't able to be loaded" error message. Has anyone else encountered this problem?

    • @NerdyRodent • 6 months ago

      Sounds like you need to reinstall your Nvidia stuff?

  • @rogerdupont8348 • 6 months ago

    Hello Nerdy Rodent.
    Thanks for your work! I'm trying to create a comic book too. However, I'm stuck after trying very hard to make your Reposer_Plus_BG work.
    I've got this error and I couldn't fix it:
    Error occurred when executing IPAdapter:
    'NoneType' object has no attribute 'patcher'
    Could you help me please? I would appreciate it a lot.

    • @NerdyRodent • 6 months ago

      NoneType means the node can’t load the file you’re asking it to load

  • @odev6764 • 5 months ago

    Is there any workflow for SDXL? I'm trying with SD 1.5 but it is too stubborn. I gave some prompts to generate a background, but it doesn't respect what I ask, even when I change parameters.

    • @NerdyRodent • 5 months ago

      Yup - a basic SDXL version is indeed there too!

    • @odev6764 • 5 months ago

      @@NerdyRodent Thank you so much. I'll check it to see if it fixes my issues.

  • @kinleyai • 7 months ago

    I can't seem to get away from the background in the Load Face image. Any suggestions?

    • @NerdyRodent • 7 months ago

      Maybe try blurring the background?

  • @alcazar6000 • 7 months ago • +2

    For me, through the manager or available within ComfyUI already (perhaps through previous node installations), there is a SAMLoader and a SAM Model Loader. Also, I find an InvertMask and Mask Invert. Unfortunately, I find nothing under a search for grounding or dino. I haven't cloned anything to this install as the manager has provided everything needed up to this point other than missing models for custom nodes. If cloning (DL a custom node outside the manager) is the case here, please provide links to those modules... or if I have this all wrong, perhaps a suggestion as to where to look for an answer to this dilemma. It appears that there are a few people with this same issue. TYIA

    • @alcazar6000 • 7 months ago

      BTW, your Reposer is brilliant!!! ... as are you my nerdy compatriot.

    • @alcazar6000 • 7 months ago • +1

      Also, as a side note: is it possible that you could also provide a snapshot of the layout, alongside the layout loading image on your Git page? It would be quite helpful. Without this, for example, as I mentioned above, there are now nodes that will load the SAM - either because the node referenced in your workflow was merged, or because a node installed by a different custom node pack performs the same task - but since the node referenced in your workflow is no longer available (having been merged or deprecated), it shows up as blank with a red background in ComfyUI. We may be able to see input and output connectors, but we cannot see what would have been the contents of that node, such as parameter values or referenced files. Since this is the case, everyone with this same dilemma would be forced to search your entire video to see if they could find those details. I watched this video and I could not see what the contents of some items are, as they were never focused on... and even if they were, there may not have been enough clarity on those nodes to decipher them. Providing those captures would alleviate these issues. This way, when we get a blank red missing node block, we can quickly and easily determine the parameters within that node when switching to compatible nodes. TA

    • @TheDocPixel • 7 months ago

      I would also like a complete screenshot of the whole node layout. I'm having the exact same problem with Reposer 1.

    • @NerdyRodent • 7 months ago • +1

      Use ComfyUI Manager to install missing custom nodes.
      Be sure to keep ComfyUI updated regularly - including all custom nodes.

    • @artifice-ltd • 7 months ago • +2

      I've found that "ComfyUI Impact Pack" also has to be installed. I think there are some missing SAM components that aren't in storyicon's "segment anything" node pack.

  • @none76ui • 6 months ago

    What an amazing tutorial. I got it working with 1.5, and tried with SDXL, but I got an error due to the IPadapter SDXL models. Have you been able to get SDXL working with this workflow?

    • @NerdyRodent • 6 months ago • +1

      There is an SDXL version on my website too, yes

    • @none76ui • 6 months ago

      @@NerdyRodent Oops, I glossed right over that. Thank you.

    • @none76ui • 6 months ago

      @@NerdyRodent I was hoping to get the full clothing workflow working with SDXL, but I couldn't. Also, one issue I'm facing in general is that after the clothing mask is created, the black parts of the image really want to stay black into the final generation. Don't know how to fix that.

    • @NerdyRodent • 6 months ago

      @@none76ui Lower the strength ;)

    • @none76ui • 6 months ago

      @@NerdyRodent Okay, I got an SDXL workflow including clothing running! And lowering the strength seems to help a bit, but I found that increasing Ksampler (base) steps seems to have a much larger effect. Just curious also, what exactly are the Base and IPA Ksamplers? I can't find any documentation on them. I also can't figure out what the "Step End/Start" that links to them does. Does it override the step count option inside the nodes?

  • @RawPowerComics • 4 months ago

    Hi. I'm getting a No module named 'midas.dpt_depth' error from the zoe depth map on the SDXL face and pose version of this. Any idea what's causing this or how to fix?

    • @NerdyRodent • 4 months ago • +1

      You may have an old version of controlnet support installed

    • @RawPowerComics • 4 months ago

      @@NerdyRodent EDIT: Nevermind. I deleted an old controlnet preprocessors folder in the custom nodes folder called comfy_controlnet_preprocessors and it worked, but now I can't get the DWPose to work for some reason. The whole thing renders but it ignores the pose of the character in the pose jpg and the preview window for DWPose Estimator stays black.

  • @alexgilseg • 4 months ago

    Your version has a ControlNet for Tile; the one on your website doesn't. Has there been an update? I am struggling to get the likeness you are getting... trying to find out why...

    • @NerdyRodent • 4 months ago • +1

      Yup, the ipadapter changed so it’s slightly updated.

    • @alexgilseg • 4 months ago

      @@NerdyRodent Some clothing won't get detected no matter what I do. Have you noticed this, or do all pieces of clothing work for you all the time?

  • @user-le2zq6td1i • 7 months ago • +1

    Fantastic. could you give us the .json file? Do you have a membership registration or a paid account?

    • @NerdyRodent • 7 months ago • +1

      Gonna have to make a Patreon thing, aren’t I? 😆

    • @user-le2zq6td1i • 7 months ago • +1

      @@NerdyRodent Of course. We want a better benefit to be easier to follow.

    • @githubaccount9135 • 7 months ago • +2

      How do I load the workflow? Where shall I find the .json file in order to load your workflow?

  • @BadNewsBerrington • 7 months ago • +2

    I can't get this to work now after the latest ControlNet update. The DWPreprocessor node cannot be found. I've tried uninstalling ControlNet and deleting all of the ControlNet folders like one Reddit post suggested, but that didn't help. If anyone knows of a fix, please help. Thanks.

    • @NerdyRodent • 7 months ago

      Make sure you are using the standard version of ComfyUI and that everything is up to date

    • @BadNewsBerrington • 7 months ago

      @@NerdyRodent I'm using the portable version, which was working fine with your previous version of this workflow. I updated everything tonight and that's when this broke. Looking through the Fannovel16/comfyui_controlnet_aux GitHub files, I see that they updated some of the DWPose files as of yesterday and an hour ago. Maybe they broke something?

    • @BadNewsBerrington • 7 months ago

      @@NerdyRodent Looks like it's a controlnet import issue. I'll contact the devs. thanks for the suggestions.
      0.0 seconds (IMPORT FAILED): C:\Users\Big Bane\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux

    • @NerdyRodent • 7 months ago • +1

      Just checked and yes - as of an hour ago they broke _all_ their preprocessors

    • @NerdyRodent • 7 months ago • +1

      They used an integer instead of a string in __init__.py - just put quotes around the 1 ("1") for MPS fallback until they fix it ;)
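      To illustrate (assuming the setting in question is the PYTORCH_ENABLE_MPS_FALLBACK assignment in that __init__.py), the reason an unquoted value breaks the import is that os.environ only accepts strings, so assigning a bare integer raises a TypeError at import time:
        # Sketch only: environment variable values must be strings in Python.
        import os

        # os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = 1   # TypeError: str expected, not int
        os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"    # quoted value works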

  • @kallekayhko1117 • 6 months ago

    This is awesome, but I can't get this to work. In 'Positive_Prompt' there's a red circle, and in ComfyUI I get an error. I tried reinstalling and updating everything, but still have not figured out what the problem is.
    ERROR:root:Failed to validate prompt for output 158:
    ERROR:root:* CLIPTextEncode 30:
    ERROR:root: - Required input is missing: clip
    ERROR:root:Output will be ignored

    • @NerdyRodent • 6 months ago

      You’ll need to make sure you’ve installed all the required nodes before you can run the workflow. Update ComfyUI itself as well as all custom nodes. Check the troubleshooting guide at the top for a full set of steps!
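      As a purely illustrative sketch (node IDs, file name and prompt text below are hypothetical), "Required input is missing: clip" means the CLIPTextEncode node has nothing wired into its clip input; in a valid API-format graph that input is fed from the CLIP output of the checkpoint loader:
        # Sketch of the expected wiring in API-format workflow JSON, written as a Python dict.
        # CheckpointLoaderSimple outputs are indexed: 0 = MODEL, 1 = CLIP, 2 = VAE.
        workflow_fragment = {
            "4": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "some_model.safetensors"}},
            "30": {"class_type": "CLIPTextEncode",
                   "inputs": {"text": "a nerdy rodent wearing a dragon t-shirt",
                              "clip": ["4", 1]}},  # ["source node id", output index]
        }
      If the node that should supply that clip connection comes from a missing custom node pack, the link is broken and prompt validation fails exactly as shown above.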

  • @towakona • 6 months ago

    I pressed Queue Prompt and it didn't work.