Seamless Outpainting with Flux in ComfyUI (Workflow Included)

  • Published: 10 Dec 2024

Comments • 83

  • @Atreyuwu
    @Atreyuwu 21 days ago

    You got a sub out of me!
    This is an excellent outpainting workflow - one of the better ones I've seen.
    Might be a little tricky to set up for the amateur/novice (might want to put a little warning for people) - but I've been working with ComfyUI for well over a year now, and it only took me a couple of minutes of pointing it to the proper models. I also had to change my ComfyUI/temp folder permissions, as I'm running on WSL2 over the network.

  • @CasasYLaPistola
    @CasasYLaPistola 2 months ago +7

    Thanks for the video and the workflow. I've used it and everything works fine until it reaches the Flux group. The problem is that I don't know which directory I should copy the Flux model into, because if I understood correctly, the Flux models don't go in the same directory as the 1.5 and SDXL checkpoints. In short, when I copy them there the workflow gives me an error. Can you tell me where to put it? Also, until now I haven't used the usual "Load Checkpoint" node for Flux models; I've used the "DualCLIPLoader" node instead.

    • @baheth3elmy16
      @baheth3elmy16 2 months ago

      I loaded a Diffusion Model node and used the Flux model from there, but I got an error with the VAE Encode. The model to use is the 17 GB one, not the 11 GB diffusion model referred to here.

    • @user-fo9ce3hr5h
      @user-fo9ce3hr5h 2 months ago

      @@baheth3elmy16 Bro, which CLIP file should I download? I don't have a CLIP file for flux1-dev-fp8.safetensors.

    • @wiwwiw2890
      @wiwwiw2890 2 months ago

      I'd like to know this too.
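
Since the thread above keeps circling the same question, here is a hypothetical Python sketch of where Flux-related files usually go in a stock ComfyUI install. The folder names reflect common community setups, and the filenames are the commonly distributed variants; treat both as assumptions and match them to your own download. Note that an all-in-one checkpoint (the roughly 17 GB flux1-dev-fp8 that bundles UNet, CLIP, and VAE) can instead go in models/checkpoints and load with "Load Checkpoint".

```python
# Hypothetical mapping of Flux files to ComfyUI subfolders (an assumption
# based on common setups; adjust paths and names to your own install).
FLUX_FILE_LOCATIONS = {
    "flux1-dev-fp8.safetensors": "models/unet",    # bare diffusion model: use "Load Diffusion Model"
    "clip_l.safetensors": "models/clip",           # text encoders: picked up by "DualCLIPLoader"
    "t5xxl_fp8_e4m3fn.safetensors": "models/clip",
    "ae.safetensors": "models/vae",                # Flux VAE: use "Load VAE"
}

def destination(filename: str) -> str:
    """Return the ComfyUI subfolder a given Flux file belongs in."""
    return FLUX_FILE_LOCATIONS[filename]

print(destination("flux1-dev-fp8.safetensors"))  # models/unet
```

If ComfyUI still doesn't list a file after moving it, refreshing the browser tab (or restarting ComfyUI) re-scans the model folders.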

  • @discotek1198
    @discotek1198 1 month ago +1

    The best workflow I have seen yet. Perfect, thank you very much!!! It works flawlessly!!! SUB+

  • @cabinator1
    @cabinator1 1 month ago

    Fantastic workflow. Works flawlessly for me.

    • @my-ai-force
      @my-ai-force 1 month ago

      Great to hear!

    • @M8cool
      @M8cool 9 days ago

      out of curiosity, what graphics card and how much VRAM do you use to run this workflow?

  • @abaj006
    @abaj006 1 month ago

    Brilliant work, really amazing. Thanks for the tutorial and sharing the workflow. Just tried it and works really well.

  • @TailspinMedia
    @TailspinMedia 1 month ago

    very cool and love how organized it is.

  • @wellshotproductions6541
    @wellshotproductions6541 2 months ago

    Awesome workflow and great video! Found it over on OpenArt, then made my way here! Keep it up. Subscribed!

  • @Macieks300
    @Macieks300 2 months ago +1

    Thanks so much for this workflow.

  • @gardentv7833
    @gardentv7833 2 months ago

    After many model re-downloads, it works. Thank you - it took 2 days to figure out.

    • @ritikagrawal8454
      @ritikagrawal8454 2 months ago

      I was able to download all the nodes and models, but my ComfyUI just won't load. Did you face a similar issue? If not, can you still tell me what worked for you?

  • @philippeheritier9364
    @philippeheritier9364 2 months ago

    It works very very well, a very big thank you for this brilliant tutorial

  • @dameguy_90
    @dameguy_90 2 months ago

    You are a genius. My subscription is worth it.

    • @my-ai-force
      @my-ai-force 2 months ago

      Thanks a ton for your support.

  • @97BuckeyeGuy
    @97BuckeyeGuy 2 months ago

    Great workflow! Thank you

  • @WasamiKirua
    @WasamiKirua 2 months ago

    thank you very much, great workflow

  • @happyme7055
    @happyme7055 2 months ago

    Stunning!!!!! First working outpaint ever ;-) GJ! Two things would be useful, I guess... a negative prompt and an optional lineart ControlNet implementation...

    • @kasoleg
      @kasoleg 2 months ago

      I've also been looking for a long time for something that actually works. I've finalized it and posted my version on Google Drive above. Take it if you want.

  • @GrocksterRox
    @GrocksterRox 2 months ago

    Very well thought out. Kudos!

    • @my-ai-force
      @my-ai-force 2 months ago +1

      Thanks for your kind words.

  • @mcdigitalargentina
    @mcdigitalargentina 2 months ago +1

    Friend, great work! Subscribed to your channel. Thanks for sharing your work.

  • @aidanblah9646
    @aidanblah9646 1 month ago +1

    Flux_Repaint - Load Checkpoint: "CheckpointLoaderSimple ERROR: Could not detect model type of: E:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\checkpoints\FLUX\flux1-dev-fp8.safetensors". All my other workflows can find the Dev-fp8 in the unet folder, but I went ahead and copied it into the checkpoints folder, in a FLUX/ subfolder like you have it. I even selected it in the "Load Checkpoint" node. I still get that error. Please help.

    • @my-ai-force
      @my-ai-force 1 month ago

      Maybe the file has been corrupted. Try redownloading it.

  • @Lord5oth
    @Lord5oth 1 month ago +2

    Cool! This bricked my Comfy build, thanks!!

    • @Atreyuwu
      @Atreyuwu 22 days ago

      It's up to you to check all necessary custom_nodes and make sure nothing clashes with anything else you might have; NOT the author of the workflow!
      Take some responsibility for yourself. It's working fine here!

  • @AlexMihaiC
    @AlexMihaiC 1 month ago

    I get this error and I don't see where to put the CLIP for the Flux model:
    VAEEncode
    'NoneType' object is not subscriptable

    • @AlexMihaiC
      @AlexMihaiC 1 month ago

      It happens if I don't activate the Restore Detail part; now it works.

  • @Henry-xs
    @Henry-xs 1 month ago

    This is really nice, but could you please tell me: in the third panel, can I import the three images "Get_pad_mask", "Get_sdxl_img", and "Get_pad_img" directly and replace them?

  • @Ekkivok
    @Ekkivok 2 months ago +1

    This workflow is great, but there is a problem: the prompt node is set up with Florence, which automates the prompt without giving you any control over it.
    For example, I have a problem with a photo that shows humans, but I want the empty latent side of the image that I'm outpainting to generate a background with no humans.
    And here is the problem: Florence describes the entire image, humans included, and then the outpaint fills the image with humans (the stuff I don't want).
    So my question is: is there a workflow, or can you make one, that restores control of the prompt without Florence?

    • @my-ai-force
      @my-ai-force 1 month ago +1

      I think you're referring to the Flux ControlNet Upscaler workflow! To address your issue, just use a text node connected to the 'CLIP Text Encode' node instead of the 'Florence2Run' output.
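
The swap described above can be sketched in ComfyUI's API-format JSON. This is a hypothetical fragment, not the author's workflow: the node ids, the checkpoint filename, and the prompt string are placeholders; the point is only that CLIPTextEncode's "text" input takes a plain string instead of a link to a Florence2Run caption output.

```python
import json

# Hypothetical API-format fragment: a fixed, hand-written prompt replaces the
# automatic Florence caption. Node ids and filenames are placeholders.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],  # CLIP output of the loader node
                     # manual text instead of a ["<florence_node_id>", <slot>] link:
                     "text": "empty scenic background, no people"}},
}
print(json.dumps(prompt, indent=2))
```

In the graph UI the equivalent move is simply deleting the wire from Florence2Run into the text input and typing the prompt directly (or wiring in a primitive text node).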

  • @clflover
    @clflover 1 month ago

    Thank you. The Flux section produces a different picture than what is in the Restore Detail section... can you help?

    • @my-ai-force
      @my-ai-force 1 month ago

      Great question! I can see where the confusion might be. The idea behind flux image-to-image repainting is to enhance and diversify the output, so it does make sense to expect some differences compared to what SDXL generates on its own. The goal is to optimize SDXL by leveraging these differences to create even more unique and creative results. If you have any specific examples or ideas in mind, I'd love to discuss them further!

  • @baheth3elmy16
    @baheth3elmy16 2 months ago +2

    (SOLVED) Hi. Your workflow has a problem with Flux group number 4: the VAE Encode returns the error "'NoneType' object is not subscriptable". I used both the 17 GB Flux and the 11 GB Flux. Can you please tell us what the problem might be?
    Edit: Problem solved. The problem was that I had disabled the optional groups because I thought I would save VRAM. When I enabled them, the workflow worked.

    • @mohammadbaranteem3487
      @mohammadbaranteem3487 2 months ago

      Hello my friend. I'm Iranian and my English is not strong. My problem is that it only works until the Flux stage. You managed to solve the problem, but I don't understand your advice. Can you explain with a photo?

    • @baheth3elmy16
      @baheth3elmy16 2 months ago

      @@mohammadbaranteem3487 The workflow is divided into four groups. Groups 2 and 4 are optional and usually you can disable them. But if you use Group 3, which is the Flux enhancement over the SDXL output, then you must enable Group 2; otherwise, the Flux group and the entire workflow won't work.

  • @lukehancockvideo
    @lukehancockvideo 2 months ago +1

    Where do the images output to? They are not appearing in my ComfyUI Output folder.

    • @my-ai-force
      @my-ai-force 2 months ago +1

      You can replace the ‘Preview Image’ node with a ‘Save Image’ node and the image will be saved.
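
In API-format JSON terms, the replacement the reply describes looks roughly like this. It is a hypothetical fragment: the upstream node id "19" (assumed to be the final VAE Decode) and the filename prefix are placeholders. The relevant behavior is that Preview Image only writes temporary files, while Save Image writes numbered PNGs to ComfyUI's output folder.

```python
# Hypothetical SaveImage node fragment; node ids and prefix are placeholders.
save_node = {
    "20": {"class_type": "SaveImage",
           "inputs": {"images": ["19", 0],          # image output of the final decode node
                      "filename_prefix": "outpaint"}},  # files land in ComfyUI/output
}
print(save_node["20"]["inputs"]["filename_prefix"])
```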

  • @ChrissyAiven
    @ChrissyAiven 1 month ago

    The sizes are not really big. Is it possible to use higher resolutions like 1080x1920 for Reels?

    • @my-ai-force
      @my-ai-force 1 month ago +1

      You can use SUPIR or Topaz for upscaling.

  • @莊惠雯-t5g
    @莊惠雯-t5g 1 month ago

    Thank you.
    I used all of your workflow and models,
    but why does my ComfyUI always show:
    CheckpointLoaderSimple
    ERROR: Could not detect model type of: D:\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\models\checkpoints\flux-dev\flux1-dev-fp8-e5m2.safetensors

    • @my-ai-force
      @my-ai-force 1 month ago

      Instead of the Checkpoint Loader node, try using Load Diffusion Model to load the Flux model.
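
The likely cause of the "Could not detect model type" error: CheckpointLoaderSimple expects an all-in-one checkpoint (UNet, CLIP, and VAE in one file), while many Flux fp8 files are bare diffusion models, so detection fails. A hedged sketch of loading the three pieces separately in API-format JSON; node ids and filenames are placeholders, and the weight_dtype value depends on which fp8 variant you have:

```python
# Hypothetical API-format fragment: load the Flux pieces separately instead of
# through CheckpointLoaderSimple. Node ids and filenames are placeholders.
prompt_fragment = {
    "10": {"class_type": "UNETLoader",          # "Load Diffusion Model" in the UI
           "inputs": {"unet_name": "flux1-dev-fp8.safetensors",
                      "weight_dtype": "fp8_e4m3fn"}},
    "11": {"class_type": "DualCLIPLoader",
           "inputs": {"clip_name1": "clip_l.safetensors",
                      "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",
                      "type": "flux"}},
    "12": {"class_type": "VAELoader",
           "inputs": {"vae_name": "ae.safetensors"}},
}
print(sorted(node["class_type"] for node in prompt_fragment.values()))
```

The UNETLoader output then feeds the sampler's model input, DualCLIPLoader feeds the text encode nodes, and VAELoader feeds the VAE Encode/Decode nodes.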

  •  2 months ago

    This is wonderfully good work, thank you for sharing! One question: I can only guess where to place the initial image with the x/y parameters. Is there a better way to do this? Anyway, great!

    • @kasoleg
      @kasoleg 2 months ago

      Yes, I had to tinker with the settings to understand how to add it, but in about 30 minutes you'll figure it out by trial and error...)

  • @วรายุทธชะชํา
    @วรายุทธชะชํา 2 months ago +1

    I want to generate multiple sizes in one round. How can I do that, sir?

  • @johannesmuller7881
    @johannesmuller7881 2 months ago

    Thanks a lot for your work, but I have one general question: does it make sense to use ComfyUI with Flux on my GTX 1070?
    Right now I'm downloading all the stuff and just want to get it set up and running, but is it worth it?

    • @my-ai-force
      @my-ai-force 1 month ago +1

      You might want to give the GGUF version of the Flux model a try!

  • @deonix95
    @deonix95 2 months ago +2

    Error occurred when executing CheckpointLoaderSimple:
    ERROR: Could not detect model type of: D:\Programs\SD\models/Stable-diffusion\flux1-dev-fp8.safetensors
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))

    • @henroc481
      @henroc481 2 months ago +3

      same here

    • @Cluster5020
      @Cluster5020 2 months ago +1

      @@henroc481 "aderek Flux v2" worked for me

  • @manipayami294
    @manipayami294 2 months ago

    When I use the GGUF loader the app crashes. Does anyone know how I can fix this problem?

  • @AmateurDrummerBG
    @AmateurDrummerBG 2 months ago

    Hey, the workflow is very cool, but when I get to the FLUX part, specifically the KSampler, it gets really slow to render. I'm using an RTX 3060 with 12 GB VRAM. Does anyone know how to speed it up?

    • @my-ai-force
      @my-ai-force 1 month ago

      Consider trying out Flux GGUF or Flux Hyper LoRA for your project!

  • @maxmad62tube
    @maxmad62tube 1 month ago

    I'm sorry, but I'm getting the error message "4-bit quantization data type None is not implemented." Can you help me?

    • @my-ai-force
      @my-ai-force 1 month ago

      Thanks for reaching out! To better assist you with this error, could you please share a bit more detail? A screenshot of the terminal or any additional context about the error would be really helpful. This way, I can understand the issue more clearly and provide you with the best support possible!

  • @digitalface9055
    @digitalface9055 2 months ago +1

    Missing nodes crashed my ComfyUI; it won't start anymore.

  • @manipayami294
    @manipayami294 2 months ago

    Can you do it with the Flux GGUF versions?

  • @DarioToledo
    @DarioToledo 2 months ago

    I didn't know of that Union repaint ControlNet. What does it do?

    • @my-ai-force
      @my-ai-force 2 months ago

      It's used for inpainting.

    • @DarioToledo
      @DarioToledo 2 months ago

      @@my-ai-force And what difference does it make compared to usual inpainting without a ControlNet? I tried to run it but it gave me errors.

  • @baheth3elmy16
    @baheth3elmy16 2 months ago +1

    The Flux model in your description is the wrong model. It is the 11 GB model and it won't work in your workflow.

    • @Cluster5020
      @Cluster5020 2 months ago

      Will any other flux1-dev (e.g. the bnb one) work as well?

    • @Cluster5020
      @Cluster5020 2 months ago

      Never mind, "aderek Flux v2" is working :)

  • @MatthewWaltersHello
    @MatthewWaltersHello 2 months ago

    I find it makes the eyes look like googly eyes. How can I fix this?

  • @maelstromvideo09
    @maelstromvideo09 2 months ago

    Try Differential Diffusion; it makes inpainting better, without most of this pain.

  • @spelgenoegen7001
    @spelgenoegen7001 2 months ago

    Awesome! Everything works perfectly with diffusion_pytorch-model_promax.safetensors. Thanks!

  • @Thawadioo
    @Thawadioo 2 months ago

    ComfyUI is making me dizzy.