Expanding Horizons: Outpainting Mastery in ComfyUI

  • Published: 25 Oct 2024

Comments • 33

  • @GrocksterRox
    @GrocksterRox  9 months ago +2

    Hi everyone - thank you for the amazing feedback. I diagnosed the problem and found that my version of IP Adapter was old; I've uploaded a new working workflow, built on the latest version, to the Civitai page (linked in the video description). Thank you again so much for your partnership and patience.

  • @LaminarRainbow
    @LaminarRainbow 9 months ago +1

    Thank you so much for sharing and producing this workflow.
    I learned a lot from your instruction and the workflow itself.
    Thank you
    🙏

    • @GrocksterRox
      @GrocksterRox  9 months ago

      You're very welcome! I'm glad it had a positive impact. :)

  • @kpr2
    @kpr2 9 months ago +2

    Very nice. Thanks again for the terrific tips/tricks!!!

  • @DerekShenk
    @DerekShenk 9 months ago +1

    Brilliant workflow with amazing results! Two questions: 1) When I create a good image, I save it as a PNG. Usually I can drag and drop a PNG back into ComfyUI and all of the settings get loaded. But when I drag in a PNG made with this flow, the settings don't appear to load properly, as the image cannot be reproduced. In particular, the seed seems to change; each time I load the same PNG file, the seed is different. How do I bring an image back into this workflow with the exact settings that created it?
    2) I downloaded your workflow today, but it does not contain the "Clip Vision Encode" node in the quick outpaint section, and I get IPAdapter errors. I can't see how that node connects, and since others have reported problems and none of their solutions have helped me, it might be good to check the workflow to ensure all the nodes are working properly. I've spent many hours trying to get this to work. Suggestions on how to connect the "Clip Vision" node?
    Thanks for any help.

    • @GrocksterRox
      @GrocksterRox  9 months ago

      That's a really great question! I would recommend two separate routes. The guaranteed route is to save the workflow as a JSON file (using the save button), because that will save every single parameter you've configured. For the save-workflow-in-image route, I would make sure that all your samplers (and any other nodes that use a seed, such as Face Detailer) have the seed locked, with the drop-down set to "Fixed". That should hopefully keep the seed stable (but I'd recommend option #1 to be 100% sure). Hope that's helpful, and wishes for continued success to you!
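
A quick way to verify what actually got saved: ComfyUI embeds the graph as JSON in the PNG's "workflow" and "prompt" text chunks, so you can inspect the stored seed directly. A minimal Pillow sketch; the file path is hypothetical, and the check for KSampler-family node names is an illustrative assumption about the workflow's samplers:

```python
# Inspect the workflow JSON that ComfyUI embeds in a saved PNG.
# Requires Pillow; "output.png" is a hypothetical path to a ComfyUI-saved image.
import json
from PIL import Image

img = Image.open("output.png")
# ComfyUI stores the editor graph under the "workflow" text chunk and the
# executed prompt under "prompt"; both may be stripped by image re-processors.
raw = img.info.get("workflow")
if raw is None:
    print("No embedded workflow found (metadata may have been stripped).")
else:
    workflow = json.loads(raw)
    # Print each sampler node's widget values; the seed is among them,
    # so you can confirm whether "Fixed" actually persisted in the file.
    for node in workflow.get("nodes", []):
        if "KSampler" in node.get("type", ""):
            print(node.get("type"), node.get("widgets_values"))
```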

    • @DerekShenk
      @DerekShenk 9 months ago

      @@GrocksterRox Any thoughts on question #2?

    • @GrocksterRox
      @GrocksterRox  9 months ago

      Yup, I noticed after I recorded the video that this node no longer exists in the latest version of the custom node package. I've updated the workflow, which can be downloaded via the link in the video, so you won't need to use that node now. Just make sure to update your IP Adapter custom node in your ComfyUI Manager.

    • @DerekShenk
      @DerekShenk 9 months ago +1

      Thank you for your responses. Very helpful. Thanks for updating the workflow. @@GrocksterRox

    • @GrocksterRox
      @GrocksterRox  7 months ago

      Of course, no prob

  • @zerohcrows
    @zerohcrows 8 months ago +1

    I really love your workflow, but I'm having issues using SD1.5 models with it; I keep getting a 'too much memory' error, which is weird as I don't get that with SDXL models. Any fix for this?

    • @GrocksterRox
      @GrocksterRox  8 months ago

      Hi there, I'm so glad that you like the workflow! The memory error is a little hard to diagnose unless I have a bit more information. Can you tell me which node it seems to be failing on? Is it possibly CLIPSeg?

  • @sportsupyours
    @sportsupyours 8 months ago +1

    Not sure what I'm doing wrong, but when using basic outpainting the padded space doesn't fully denoise into anything; it just generates mostly white space. Banding along the edges is terrible too, even when changing up the feathering. All settings are the same as the imported workflow, and I'm pretty sure I'm not missing any models. I wish I could get this working; it looks amazing.

    • @GrocksterRox
      @GrocksterRox  8 months ago +1

      It's amazing when it works! You probably have a missing connection, or possibly the Use Everywhere nodes aren't set up quite right. If interested, you can book 1-on-1 time and we can go through it together (among additional learning topics if interested): cal.com/grock/training
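
For anyone debugging the same symptom: a basic outpaint reduces to three core nodes, and mostly-white padding often traces to the sampler's denoise sitting below 1.0 or the pad mask never reaching the inpaint encoder. A rough ComfyUI API-format sketch, not the video's actual workflow; node IDs, link indices, and parameter values are illustrative, and the checkpoint loader, text encodes, and VAE wiring are omitted:

```python
# Illustrative ComfyUI API-format fragment for basic outpainting.
# Node IDs ("0".."5") and values are placeholders; upstream nodes are omitted.
outpaint_fragment = {
    "1": {  # Pad the canvas and emit a mask covering the new border
        "class_type": "ImagePadForOutpaint",
        "inputs": {"image": ["0", 0], "left": 256, "top": 0,
                   # feathering softens the seam; too little can cause banding
                   "right": 256, "bottom": 0, "feathering": 40},
    },
    "2": {  # Encode pixels + mask; the mask tells the sampler what to fill
        "class_type": "VAEEncodeForInpaint",
        "inputs": {"pixels": ["1", 0], "mask": ["1", 1],
                   "vae": ["0", 2], "grow_mask_by": 6},
    },
    "3": {  # denoise must stay at 1.0 so the padded space is fully generated
        "class_type": "KSampler",
        "inputs": {"model": ["0", 0], "positive": ["4", 0],
                   "negative": ["5", 0], "latent_image": ["2", 0],
                   "seed": 42, "steps": 30, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal",
                   "denoise": 1.0},
    },
}
```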

  • @HisWorkman
    @HisWorkman 9 months ago +1

    Thank you! I am getting an error from ComfyUI: "the following node types were not found: IPAdapterCLIPVisionEncode". This node ("CLIP Vision Encode (IPAdapter)") was red and disabled on the sheet. Any help would be greatly appreciated :)

    • @GrocksterRox
      @GrocksterRox  9 months ago +1

      Hi there - someone else had that issue earlier as well, and I believe they were missing the IP Adapter Plus node. Can you confirm you've installed it, and also made sure that both it and your ComfyUI version are up to date?

    • @AlexanderDuttonButton
      @AlexanderDuttonButton 9 months ago

      @@GrocksterRox I have IPAdapterPlus but the same issue; maybe a version-naming problem. Pretty sure I switched mine with 'ClipVisionEncode' and it worked great.

    • @parlabaneisback
      @parlabaneisback 9 months ago

      What worked for me was replacing the 'Apply IPAdapter' node with a new one (Add Node / ipadapter / Apply IPAdapter).
      The new one has a 'clip_vision' input rather than a 'clip_vision_output' one, so I didn't need the 'IPAdapterCLIPVisionEncode' node to get it to work.
      (The 'IPAdapterCLIPVisionEncode' and the old 'Apply IPAdapter' nodes had to be removed, rather than just disconnected, to stop me getting a whitespace error.)
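
In API-format terms, that swap looks roughly like the following. The class-name strings are assumptions inferred from the node titles in this thread, not verified identifiers, so check your own workflow JSON for the exact spellings:

```python
# Sketch of the node swap described above. Class names and node IDs are
# assumptions based on the node titles mentioned in this thread.
old_wiring = {
    "10": {"class_type": "IPAdapterCLIPVisionEncode",  # delete this node outright
           "inputs": {"clip_vision": ["8", 0], "image": ["9", 0]}},
    "11": {"class_type": "IPAdapterApply",             # old 'Apply IPAdapter'
           "inputs": {"ipadapter": ["7", 0], "model": ["4", 0],
                      "clip_vision_output": ["10", 0],  # needed the encoder node
                      "weight": 0.7}},
}
new_wiring = {
    "11": {"class_type": "IPAdapterApply",             # freshly re-added node
           "inputs": {"ipadapter": ["7", 0], "model": ["4", 0],
                      # takes CLIP vision + image directly; no encoder needed
                      "clip_vision": ["8", 0], "image": ["9", 0],
                      "weight": 0.7}},
}
```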

    • @GrocksterRox
      @GrocksterRox  9 months ago +1

      Thank you for the amazing feedback. I diagnosed the problem and found that my version of IP Adapter was old; I've uploaded a new working workflow, built on the latest version, to the Civitai page (linked in the video description). Thank you again so much for your partnership and patience.

  • @97BuckeyeGuy
    @97BuckeyeGuy 9 months ago +2

    I thought TGHQFace was an upscaler model? How did you get it into a LoRA?

    • @GrocksterRox
      @GrocksterRox  9 months ago

      Very good observation! I had this in place for so long without realizing it, but going back and testing it, you are 100% right (it was in the LoRA folder, so it was picked up in my LoRA options, but it is not a true LoRA and has no effect even with different weights). I'll make an update in the description (and I've moved it over to my upscaler folder). Thank you again so much!

    • @97BuckeyeGuy
      @97BuckeyeGuy 9 months ago +1

      @@GrocksterRox Well, that's a shame. I was hoping you'd found a new toy for me to play with. Thank you for your work. It's greatly appreciated.

  • @AlexanderDuttonButton
    @AlexanderDuttonButton 9 months ago +1

    This was very helpful, thanks. I was getting unsatisfying results with the outpainting methods. I think I was just pushing too much IPAdapter.

    • @GrocksterRox
      @GrocksterRox  9 months ago

      Glad it helped! I was also frustrated for a long time until I simplified things and the scenes started to click.

  • @NikolasMacmillan
    @NikolasMacmillan 9 months ago +2

    Thanks for all the amazing content Grockster!! One quick question, in the Quick/Basic outpainting, what is the Apply IPAdapter node providing? It looks like it's receiving the clip vision but isn't outputting anything.

    • @GrocksterRox
      @GrocksterRox  9 months ago +2

      Yes, exactly, thank you for catching that! You need to connect the model output from the Apply IP Adapter node to the KSampler you're using for outpainting. It definitely helps in reducing banding (which is probably why I noticed larger-than-usual banding during the video; I thought I had hooked it up properly). To see a comparison of the weight and its end effect, I created a crude comparison image here: civitai.com/images/5172504
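
Concretely, the missing link is the patched MODEL output: Apply IPAdapter returns a model with the adapter applied, and the outpainting sampler has to sample from that model rather than the raw checkpoint. A hedged API-format fragment; node IDs and the IPAdapterApply class name are assumptions, not taken from the actual workflow:

```python
# Route the IP Adapter's patched model into the outpainting sampler.
# IDs and class names are assumptions for illustration only.
wiring = {
    "11": {"class_type": "IPAdapterApply",
           "inputs": {"ipadapter": ["7", 0], "model": ["4", 0],
                      "image": ["9", 0], "weight": 0.7}},
    "12": {"class_type": "KSampler",
           "inputs": {"model": ["11", 0],   # patched model, NOT ["4", 0]
                      "positive": ["5", 0], "negative": ["6", 0],
                      "latent_image": ["2", 0], "seed": 42, "steps": 30,
                      "cfg": 7.0, "sampler_name": "euler",
                      "scheduler": "normal", "denoise": 1.0}},
}
```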

  • @rule7254
    @rule7254 8 months ago +1

    "Input and output must have the same number of spatial dimensions" (KSampler Efficient) errors are fun! Anyone have any clue?

    • @GrocksterRox
      @GrocksterRox  8 months ago

      Which aspect of the flow is this happening at? It's a bit hard to diagnose without seeing the context of the error.

    • @rule7254
      @rule7254 8 months ago +1

      @@GrocksterRox I was able to fix this issue by using `Set Latent Noise Mask` paired with a `VAE Encode` between the IPAdapter and KSampler, instead of the inpainting encode (see the sketch after this thread). Something about the inpainting one caused an in/out sizing mismatch regardless of fiddling around or pre-processing the input image. I wish we could upload screenshots here!
      (Thanks for the quick reply though!)

    • @GrocksterRox
      @GrocksterRox  8 months ago

      @@rule7254 I'm so glad that you were able to resolve the issue! Congrats.
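
rule7254's fix above, sketched in API-format terms: encode with a plain VAE Encode and then attach the mask via Set Latent Noise Mask, rather than using VAE Encode (for Inpainting). Node IDs and link indices are illustrative, and a plain KSampler stands in for the Efficient variant from the error message:

```python
# Sketch of the spatial-dimension fix: replace VAEEncodeForInpaint with a
# plain VAEEncode followed by SetLatentNoiseMask. IDs are illustrative.
fixed_wiring = {
    "20": {"class_type": "VAEEncode",            # plain encode, no mask logic
           "inputs": {"pixels": ["1", 0], "vae": ["0", 2]}},
    "21": {"class_type": "SetLatentNoiseMask",   # attach the outpaint mask here
           "inputs": {"samples": ["20", 0], "mask": ["1", 1]}},
    "22": {"class_type": "KSampler",             # Efficient variant in the thread
           "inputs": {"model": ["11", 0], "positive": ["5", 0],
                      "negative": ["6", 0], "latent_image": ["21", 0],
                      "seed": 42, "steps": 30, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 1.0}},
}
```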