SDXL 1.0 - Img2Img & Inpainting with SeargeSDXL

  • Published: 29 Sep 2024

Comments • 142

  • @Searge-DP
    @Searge-DP 1 year ago +89

    That is amazing. And I'm glad to see that you enjoyed using my workflow.
    This is probably the best explanation of the workflow that I've seen so far, so if I ever forget how my workflow works, I can just come back here and learn everything I need to know. You also just gained another subscriber; I need to watch some of the other great videos on your channel.

    • @NerdyRodent
      @NerdyRodent 1 year ago +9

      And many thanks to you for the workflow! Lots of hard work went in there 👍

    • @Searge-DP
      @Searge-DP 1 year ago +10

      Yes, it's a bit of work to get everything in place. And thanks to your video, I can now just send people with questions to this channel while I focus my limited free time on the next update before getting back to writing more documentation 😃

    • @NerdyRodent
      @NerdyRodent 1 year ago +3

      @@Searge-DP woohoo! Updates! 😀

    • @KrAzeeCoMic
      @KrAzeeCoMic 1 year ago +1

      Anyone know how you can add ControlNet to this workflow?

    • @KrAzeeCoMic
      @KrAzeeCoMic 1 year ago

      Anyone know how you can add ControlNet to this workflow? @@NerdyRodent

  • @tikishark1
    @tikishark1 1 year ago +7

    Here's a summary of the prompt modes if you'd like to add it to your workflow; I just put it in a text box (a rough sketch of the weighted-blend idea follows the list):
    1. Subject Focus: Prioritize the main details of your input to closely align the result with your vision. For instance, if you describe a vibrant sunset over the ocean, the image will highlight the sun's colors and the water's reflection.
    2. Style Focus: Use this option to ensure the image embodies a particular artistic approach or style. If you mention a desire for a Renaissance painting vibe, the generated image will mimic the brushwork and color palette of that era.
    3. Weighted: Achieve a balance between subject and style. If you want a balance between a serene forest scene (subject) and impressionist brushstrokes (style), you can fine-tune the settings to get the desired mix.
    4. Overlay: This mode allows for unexpected but pleasantly surprising results. If you ask for a fusion of a cityscape and a dreamy cloudscape, the output might creatively merge elements from both.
    5. Weighted-Overlay and Overlay-Weighted: Combine the strengths of both methods for even more precise control over the image output. When combining a subject like a mountain with a minimalist style, you can tweak the balance to get a unique interpretation.
    6. Style Only: Generate outputs solely focused on the stylistic approach. If you want a picture that captures the essence of impressionism, the image will exhibit the distinct brushwork and color play of that style.
    7. Subject - Style and Style - Subject: These modes emphasize one aspect while downplaying the other, leading to intriguing outcomes. For a subject like a bouquet of flowers with a modern art twist, the image could either prioritize the subject's details or infuse them with modernist aesthetics.
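
    A minimal sketch of the weighted-blend idea behind modes 3-5 (illustrative Python only; the function name and tensor shapes are assumptions, not SeargeSDXL's actual code):

    import torch

    def blend_conditionings(subject_cond, style_cond, style_weight=0.5):
        # Linear interpolation between two text-encoder embeddings:
        # style_weight = 0.0 -> pure subject focus, 1.0 -> pure style focus.
        return (1.0 - style_weight) * subject_cond + style_weight * style_cond

    subject = torch.randn(1, 77, 2048)  # stand-in for an encoded subject prompt
    style = torch.randn(1, 77, 2048)    # stand-in for an encoded style prompt
    mixed = blend_conditionings(subject, style, style_weight=0.3)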

  • @ysy69
    @ysy69 1 year ago +6

    Thank you for this tutorial. I've been waiting to invest in Comfy, but what was holding me back was inpainting. Thanks to you, I have no more excuses but to dive in. 🐀

  • @JSON_bourne
    @JSON_bourne 1 year ago +1

    The easiest way to get the workflow is to drag and drop an example image into the browser window. The workflow will load instantly. Then open the Manager and click "Install Missing Nodes". EASY PEASY
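
    (Why the drag-and-drop works: ComfyUI embeds the workflow JSON in the PNG's metadata. A minimal sketch to peek at it, assuming Pillow is installed; "workflow" is, to my knowledge, the text-chunk key ComfyUI uses:)

    import json
    from PIL import Image

    img = Image.open("example.png")      # an image saved by ComfyUI
    workflow = img.info.get("workflow")  # workflow JSON lives in a PNG text chunk
    if workflow:
        print(json.dumps(json.loads(workflow), indent=2)[:500])
    else:
        print("No embedded workflow found")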

  • @KammaKhazi
    @KammaKhazi 1 year ago +7

    I love the absolutely bonkers node structure (it's like the inside of an engine). I can't wait to see one with multiple LoRAs, ControlNet nodes, ADetailer, SD upscale, etc. 😂

  • @JDRos
    @JDRos 1 year ago

    SeargeDP 4.0 doesn't have the disconnected upscale node. Now I'm lost as to why it doesn't generate an upscaled image.

    • @NerdyRodent
      @NerdyRodent 1 year ago

      You can still use the hires fix for larger images in 4.0

  • @MegaGovert
    @MegaGovert 27 days ago

    Thank you, so glad I stumbled across your channel, and in particular this workflow; so neat and easy to use for those of us who lost our meatballs in the spaghetti.

  • @kallamamran
    @kallamamran 1 year ago

    Very easy to work with... 🤣 Or maybe not
    "Lots of prompting options high res fix upscaling all in a single compact interface" ..... Uhm... That's Automatic1111 right?

  • @sakolsangsuk7578
    @sakolsangsuk7578 1 year ago +1

    I tried to run it, but got this:
    Prompt outputs failed validation
    ImageUpscaleWithModel:
    - Required input is missing: image
    ImageUpscaleWithModel:
    - Required input is missing: image
    SeargeInput4:
    - Value not in list: base_model: 'sd_xl_base_1.0.safetensors' not in ['beautifulRealistic_v60.safetensors', 'dream2reality_v10.safetensors', 'epicrealism_pureEvolutionV5.safetensors', 'majicmixRealistic_betterV2V25.safetensors', 'sdxl10ArienmixxlAsian_v10.safetensors']
    - Value not in list: refiner_model: 'sd_xl_refiner_1.0.safetensors' not in ['beautifulRealistic_v60.safetensors', 'dream2reality_v10.safetensors', 'epicrealism_pureEvolutionV5.safetensors', 'majicmixRealistic_betterV2V25.safetensors', 'sdxl10ArienmixxlAsian_v10.safetensors']
    - Value not in list: vae_model: 'sdxl-vae.safetensors' not in []
    - Value not in list: main_upscale_model: '4x_NMKD-Siax_200k.pth' not in []
    - Value not in list: support_upscale_model: '4x-UltraSharp.pth' not in []
    - Value not in list: lora_model: 'sd_xl_offset_example-lora_1.0.safetensors' not in ['DetailedEyes_xl_V2.safetensors', 'JapaneseDollLikeness_v15.safetensors', 'ThaiMassageDressV2.safetensors', 'asianGirlsFace_v1.safetensors', 'ellewuu-V1.safetensors', 'flat2.safetensors', 'handmix101.safetensors', 'koreanDollLikeness.safetensors', 'mahalaiuniform-000001.safetensors', 'sabai6.safetensors', 'taiwanDollLikeness_v20.safetensors']

    • @NerdyRodent
      @NerdyRodent 1 year ago +2

      Looks like you forgot the bit where you download all the models 😉
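
      (A quick sanity check, reusing the filenames from the error above; the models_root path is an assumption, adjust it to your install:)

      from pathlib import Path

      expected = {
          "checkpoints": ["sd_xl_base_1.0.safetensors", "sd_xl_refiner_1.0.safetensors"],
          "vae": ["sdxl-vae.safetensors"],
          "upscale_models": ["4x_NMKD-Siax_200k.pth", "4x-UltraSharp.pth"],
          "loras": ["sd_xl_offset_example-lora_1.0.safetensors"],
      }

      models_root = Path("ComfyUI/models")  # adjust to your install location
      for folder, names in expected.items():
          for name in names:
              path = models_root / folder / name
              print(("OK      " if path.exists() else "MISSING ") + str(path))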

  • @Anubisx
    @Anubisx 11 months ago

    When using image-to-image or inpainting, the finished image has the same pixel size, but the ratio has changed and the saved images have sections cut off. Anyone know what I'm missing?

  • @BrandosLounge
    @BrandosLounge 11 months ago

    How do you work the upscaler? If I connect the upscaler, it creates images that are 1024x1024.

  • @mikealbert728
    @mikealbert728 1 year ago +1

    Cool. SDXL ControlNets are out now. Can you cover installation and use with ComfyUI? Thanks

  • @tikishark1
    @tikishark1 1 year ago +2

    Excellent demonstration. I’ve only been using a small fraction of this workflow because I was afraid to touch any of the settings. Now I’m excited to press all the buttons. Ty

  • @Remianr
    @Remianr 1 year ago +2

    Leeet's gooooo! I've been waiting for this kind of workflow for ComfyUI since SDXL 1.0 was released :D I can't resist saying it: I freaking love open source.

  • @APOLOVAILS
    @APOLOVAILS 10 months ago +1

    Brilliant and very compact tutorial! Not spaghetti, right to the point! 🙏🙏

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 9 months ago

    How do I add an OpenPose ControlNet? I tried custom and it thinks it's just an image. Do I need to preprocess, or?

  • @RedRojo210
    @RedRojo210 1 year ago +2

    Finally! I was searching and searching for something on inpainting and how to do it with comfy. With this workflow and your explanation, it all seems so much easier now.

  • @betortas33
    @betortas33 10 months ago

    Hey Nerdy Rodent, thanks for the tutorial! Do you plan on doing a new video for the newest version?

  • @Sbill92085
    @Sbill92085 1 year ago +1

    This is awesome, already got it downloaded and playing around. Bravo! Very well done my friend :)

  • @pogiman
    @pogiman 1 year ago

    Hi, what does this mean?
    Error occurred when executing SeargeImageSave:
    'Namespace' object has no attribute 'disable_metadata'
    File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 144, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 67, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\AI\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\SeargeSDXL\searge_sdxl_sampler_node.py", line 361, in save_images
    if args.disable_metadata is None or not args.disable_metadata:

    • @NerdyRodent
      @NerdyRodent 1 year ago

      Could be you're using a really old version or something?

  • @2PeteShakur
    @2PeteShakur 1 year ago +1

    Brilliant stuff, very comprehensive and straight to the point, no fluff as always, NerdyRodent! Oh, and don't forget to use a Save Image node instead of Preview if you want images saved by default! ;)

  • @Maxime_motion
    @Maxime_motion 11 months ago

    Thanks for that, it helped me a lot. Do you have a V4 update?

  • @YoghurtKiss
    @YoghurtKiss 4 months ago

    My only question: I run Comfy as an integration in Krita, meaning I never see these workflow windows etc. Can I still make changes to these files/workflows/background things? Or is what this video presents just something for me to forget about?

    • @NerdyRodent
      @NerdyRodent 4 months ago

      If you’re just using Krita then you can forget about comfy… unless you want to start making your own workflows, of course!

  • @stablefaker
    @stablefaker 1 year ago +1

    Very excited to see proper inpainting in ComfyUI

  • @djsidemon5557
    @djsidemon5557 1 year ago

    Waiting for image-to-video on ComfyUI

  • @PZMaTTy
    @PZMaTTy 1 year ago

    I'm getting some weird and creepy results with denoising at 0.666, like you

  • @ateafan
    @ateafan 1 year ago

    Thanks for the tutorial, it helped me a lot! Now!... let's hit queue picture and...... **GIF OF NUCLEAR EXPLOSION** mah GPU!!!! >_

  • @pon1
    @pon1 1 year ago

    I got "fatal: refusing to merge unrelated histories", but I have some other custom node installed as well, maybe they are interferring.

    • @pon1
      @pon1 1 year ago

      Just pasting the folder into custom_nodes worked fine though.

  • @makulVR
    @makulVR 1 year ago +1

    Wow thanks! How can I use two custom LoRAs with this workflow?

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      I've not used multiple LoRAs myself as yet, but you could try connecting your extra LoRA to the checkpoint and then using ModelMergeSimple
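
      (For intuition, a ModelMergeSimple-style blend is just a weighted average of two models' weights. An illustrative sketch, not ComfyUI's actual node code:)

      import torch

      def merge_simple(state_a, state_b, ratio=0.5):
          # ratio is the weight of model A; (1 - ratio) goes to model B.
          return {k: ratio * state_a[k] + (1.0 - ratio) * state_b[k]
                  for k in state_a.keys() & state_b.keys()}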

  • @LuisManuelLealDias
    @LuisManuelLealDias 1 year ago

    I tried to change vae_model and it went "undefined". Same with others. What happened?

  • @Anubisx
    @Anubisx 11 months ago

    Amazing tutorial and amazing workflow, keep up the great work. Subbed

  • @relaxation_ambience
    @relaxation_ambience 1 year ago

    @NerdyRodent Hi. When you inpainted with a mask, did ComfyUI regenerate the whole picture, or only the masked area? This is a critical point for me in starting to use ComfyUI, as until recently it regenerated the whole image, and if you work with large images it's impossible to work normally.

  • @c0nsumption
    @c0nsumption 1 year ago

    A couple of hours into using it... it's genuinely something. At first I thought it was a gimmick, but with all the prompt layers it allows you to force the outputs in the direction you need them. I'll make an updated comment in a couple of days.

  • @aintelligence-ef2yd
    @aintelligence-ef2yd 1 year ago

    As an Auto1111 user, I think ComfyUI will completely fill my needs! I will switch ASAP. Thank you for the tutorials!

  • @SyntheticVoices
    @SyntheticVoices 1 year ago +1

    Useful as always. Need to get ComfyUI booted up.

  • @MarkusGrand
    @MarkusGrand 1 year ago

    Any reason why I can't achieve the kind of high accuracy on text and writing claimed by StabilityAI?

  • @MultiFlashone
    @MultiFlashone 1 year ago +1

    Fabulous post as always! Thank you so much for this!!!

  • @erdbeerbus
    @erdbeerbus 1 year ago

    Great! Did you find a way to "reproduce" an image sequence in this way? // like a stack? grz

  • @ryangonzales7716
    @ryangonzales7716 1 year ago

    Holy crap, the entire thing looks like a freaking PC motherboard circuit, wtf 😱😱

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      Cool, isn’t it! But can we make Minecraft in ComfyUI? 😉

    • @ryangonzales7716
      @ryangonzales7716 1 year ago

      @@NerdyRodent Do a crazy video attempting it! 😆 BTW, can you please do a quick walkthrough vid on integrating the new ControlNet with the SeargeSDXL ComfyUI workflow? Pretty pleeaasseee 😘

  • @drawmaster77
    @drawmaster77 1 year ago +1

    Very cool, thank you, Nerdy!

  • @cgdtb
    @cgdtb 6 months ago

    This is amazing!

  • @contrarian8870
    @contrarian8870 11 months ago

    @Nerdy Rodent Thanks for this tutorial, very useful. I'm just getting into Comfy, and I use Searge as a starting point. There's a setup I don't know how to do in Searge: I want to do img2img, but with a depth map as an extra input alongside the main image. Basically img2img + depth2img. The latest Searge (4.3) has ControlNets, but I don't want to make a depth map in a ControlNet; I want to supply my own. There's a second image input, but it's meant for masks for inpainting, not really depth maps. Maybe there's a way to "inject" my depth map into a depth ControlNet, so that the ControlNet uses an external depth map instead of making one, but I don't know how to do this. If you have any ideas (or alternative Comfy setups that can do this), please let me know. Thanks again.

    • @NerdyRodent
      @NerdyRodent 11 months ago

      Yup, just bypass the preprocessing when using preprocessed images

  • @Because_Reasons
    @Because_Reasons 1 year ago +1

    This is fantastic, thank you.

  • @buddy2665
    @buddy2665 1 year ago

    Do you get a syntax error in ComfyUI?

  • @Waiderlynx
    @Waiderlynx 1 year ago

    Thank you for the great video! What is the minimum required VRAM for SDXL with ComfyUI?

    • @NerdyRodent
      @NerdyRodent 1 year ago

      Pretty low. Even ancient 6GB cards should be ok

  • @johnmcaleer6917
    @johnmcaleer6917 1 year ago +1

    I've been using this workflow for a little while and had my head around most of it; thankfully I've now learned the high-res fix properly. Thanks heaps for your vids, and to @Searge-DP for the wonderful work!

  • @Sbill92085
    @Sbill92085 1 year ago

    Is there a way to easily add more LoRA models?

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      There is a LoRA stacker node available if you want loads of LoRAs!

  • @23pinkdots
    @23pinkdots 1 year ago

    Sorry for the basic question, but... how do you gradually zoom in and out (my mouse scrolling is way too fast)?

    • @NerdyRodent
      @NerdyRodent 1 year ago

      I just use my scroll wheel

    • @23pinkdots
      @23pinkdots 1 year ago

      @@NerdyRodent That's what I do, but the minimum zoom step is huge: either very far or super close. Do you have any idea how to refine its steps?

    • @NerdyRodent
      @NerdyRodent 1 year ago

      imwheel is the best option as you can configure per-application scroll wheel settings or changes when a modifier key is pressed

  • @wakegary
    @wakegary 1 year ago +1

    my man

  • @jasondulin7376
    @jasondulin7376 1 year ago

    Thank you so much for this wonderful tutorial. If I might ask, can this be used with other models? If we switch models, are there other considerations? I get errors that I don't yet understand. Thanks!

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      Yup, it can indeed be used with other models :)

  • @spiritpower3047
    @spiritpower3047 1 year ago

    How can we merge/blend two or three pictures to make a new one? (like Midjourney's blend function)

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      Use revisions - ruclips.net/video/uK51kvxFkhc/видео.html

  • @Dingle.Donger
    @Dingle.Donger 11 months ago

    I love this. I'm so glad I found this. Thanks for making this video and thanks to Searge for making this node setup. I'm on 4.3 right now and it's even cleaner than the version used in the video.

  • @hfoxhaxfox1841
    @hfoxhaxfox1841 1 year ago

    Are there any optimizations to make generations faster? It takes me 90-150 seconds per image

    • @NerdyRodent
      @NerdyRodent 1 year ago

      Mostly GPU. It's about 7 seconds for me.

  • @MiraPloy
    @MiraPloy 1 year ago

    Can someone explain the difference between the main and secondary prompt, what to put in each, and the technical reason why they are separate?

    • @NerdyRodent
      @NerdyRodent 1 year ago

      Basically, go with natural language in G and the usual word soup in L. Unfortunately, the paper says little as to why, but you can check it out at arxiv.org/abs/2307.01952
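
      (The same G/L split is visible outside ComfyUI too. A sketch with the diffusers library, assuming it's installed; if I have the mapping right, prompt feeds CLIP ViT-L and prompt_2 feeds OpenCLIP ViT-bigG:)

      import torch
      from diffusers import StableDiffusionXLPipeline

      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe(
          prompt="rodent, detailed fur, 4k, bokeh",                    # L: word soup
          prompt_2="a nerdy rodent sitting at a computer in a study",  # G: natural language
      ).images[0]
      image.save("rodent.png")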

  • @farslght
    @farslght 1 year ago

    This is so good! Except it gives me an error and I can't use it. Now I have to become a Python programmer first.

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      Lol, no programming required, thank goodness, as we can just use the app and pre-made workflows! 😃

  • @ManishKumar-885
    @ManishKumar-885 1 year ago

    Make a video on ControlNet in ComfyUI, please!

    • @NerdyRodent
      @NerdyRodent 1 year ago

      So far as I know there’s only one available at the moment 😕

  • @adamschroeder3568
    @adamschroeder3568 1 year ago

    After applying the mask it still recreates the entire image instead of just using my masked areas. Are there any additional steps needed to make sure the mask is used?

    • @NerdyRodent
      @NerdyRodent 1 year ago

      Not had that happen myself, so maybe confirm the height & width?

  • @rifz42
    @rifz42 1 year ago

    Thanks! This is great!! Is there a way to see a progress bar?

    • @NerdyRodent
      @NerdyRodent 1 year ago

      The two easiest ways are to view the terminal output progress bar, or zoom out and watch the progress through the nodes

  • @erazorDev
    @erazorDev 1 year ago

    Weird, image-to-image always produces an exact copy of the source image for me.

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      Try with denoise 0.9 😉

    • @erazorDev
      @erazorDev 1 year ago

      @@NerdyRodent Thanks. I mean, it works, but it's the last parameter I would have suspected would fix it.
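
      (It makes sense once you see what denoise does: it controls how much noise is added to the source before sampling, so low values hand back nearly the original. A rough equivalent using diffusers' strength parameter as an assumed stand-in for the ComfyUI setting:)

      import torch
      from diffusers import StableDiffusionXLImg2ImgPipeline
      from diffusers.utils import load_image

      pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      init = load_image("input.png")
      # strength ~ denoise: near 0.0 returns the input unchanged, 1.0 ignores it.
      out = pipe(prompt="oil painting of a rodent", image=init, strength=0.9).images[0]
      out.save("output.png")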

  • @amj2048
    @amj2048 1 year ago +1

    Very cool!

  • @ickorling7328
    @ickorling7328 1 year ago +1

    My thanks 🎉

  • @pragmaticcrystal
    @pragmaticcrystal 1 year ago +1

    👍

  • @LIMBICNATIONARTIST
    @LIMBICNATIONARTIST 1 year ago

    New!

  • 1 year ago

    Is there any way to change the output directory in ComfyUI?

    • @NerdyRodent
      @NerdyRodent 1 year ago +1

      Yup, that’s below the user interface area though

    • 1 year ago

      @@NerdyRodent Thanks, but where exactly? I can't find it anywhere.

    • @NerdyRodent
      @NerdyRodent 1 year ago +2

      @ In the "Save Image" nodes

    • 1 year ago

      @@NerdyRodent Thanks!

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 1 year ago

    I have issues when I don't connect the upscale nodes together

    • @NerdyRodent
      @NerdyRodent 1 year ago

      3.4 is working for me 😉

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 1 year ago

      @@NerdyRodent Yes, I saw that. I can use it, no problem, but I have to deactivate the upscaler nodes to actually run this.

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 1 year ago

      @@NerdyRodent OK, now it's working. I had to reinstall ComfyUI to get this working.

  • @douchymcdouche169
    @douchymcdouche169 1 year ago

    Pretty ironic how ComfyUI is called that. Anything but "comfy" these days.

  • @johnpope1473
    @johnpope1473 1 year ago

    I was in love with this UI (for about 10 seconds) when I first loaded it. But the amount of mouse work and fighting with scrolls and zooms is too gamey. I don't game.

  • @Feelix420
    @Feelix420 1 year ago +4

    SDXL is just SD 2.1 trained on 1024 pixels

  • @MondoMurderface
    @MondoMurderface 1 year ago +1

    ComfyUI hurts my eyes and SDXL isn't actually that good.

  • @Retardul
    @Retardul 1 year ago

    How are you doing the drag-and-drop trick from Preview to Load Image?