Flux ControlNet (Depth, Canny, HED) - Works 100%

  • Published: 11 Sep 2024
  • Flux ControlNet supports 3 models:
    1- Canny
    2- HED
    3- Depth (Midas)
    Each ControlNet was trained at 1024x1024 resolution. However, it is recommended to generate images at 1024x1024 for Depth and at 768x768 for Canny and HED for better results (see the preprocessing sketch at the end of this description).
    Install the x-flux-comfyui custom node:
    github.com/XLa...
    After the first launch, the ComfyUI/models/xlabs/loras and ComfyUI/models/xlabs/controlnets folders will be created automatically.
    Download the Flux ControlNet model collection:
    huggingface.co...
    🤯 Get my FREE comfyui tutorials with workflows: openart.ai/wor...
    • CG TOP TIPS - AI MUSIC
    / @cgtoptips
    ------------------------------------
    🌍 SOCIAL
    / cgtoptips
    / cgtoptips
    📧 cg.top.tips@gmail.com
    ------------------------------------
    #ComfyUI
    #Flux
    #FluxControlNet
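
    The resolution advice above can also be tried outside ComfyUI. Below is a minimal preprocessing sketch, assuming the standalone controlnet_aux package (the same dependency the Canny/HED/Depth preprocessor nodes rely on), OpenCV, Pillow, and a hypothetical input.png; it is not the video's exact workflow.

    # Preprocessing sketch: build the three hint images at the recommended resolutions -
    # 768x768 for Canny and HED, 1024x1024 for Depth (per the description above).
    # Requires: controlnet_aux, opencv-python, pillow
    import cv2
    import numpy as np
    from PIL import Image
    from controlnet_aux import HEDdetector, MidasDetector

    def to_pil(img):
        # controlnet_aux may return PIL or numpy depending on version
        return img if isinstance(img, Image.Image) else Image.fromarray(np.asarray(img))

    source = Image.open("input.png").convert("RGB")  # hypothetical input image

    # Canny: a plain OpenCV edge map on a 768x768 copy of the source
    gray = cv2.cvtColor(np.array(source.resize((768, 768))), cv2.COLOR_RGB2GRAY)
    Image.fromarray(cv2.Canny(gray, 100, 200)).save("canny_hint.png")

    # HED and Midas annotators, using the lllyasviel/Annotators weights from Hugging Face
    hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
    midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
    to_pil(hed(source, detect_resolution=768, image_resolution=768)).save("hed_hint.png")
    to_pil(midas(source, detect_resolution=1024, image_resolution=1024)).save("depth_hint.png")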

Comments • 36

  • @marcihuppi
    @marcihuppi 28 days ago +4

    Error occurred when executing XlabsSampler:
    'ControlNetFlux' object has no attribute 'load_device'
    I already did a git pull to update ComfyUI... any other ideas?
    Thanks in advance ♥

  • @VaradRane-p2q
    @VaradRane-p2q 28 days ago +1

    Can we use this with inpainting techniques? Is there a workflow for it in ComfyUI?

  • @senoharyo
    @senoharyo 28 days ago

    Very nice, I tried it last night. Do these ControlNets work with the Flux checkpoint workflow? :)

  • @roylow1292
    @roylow1292 23 days ago +2

    VAE Decode error. Error occurred when executing VAEDecode:
    Given groups=1, weight of size [4, 4, 1, 1], expected input[1, 16, 128, 128] to have 4 channels, but got 16 channels instead

    File "D:\ComfyUI-aki-v1.3\execution.py", line 316, in execute
      output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\ComfyUI-aki-v1.3\execution.py", line 191, in get_output_data
      return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\ComfyUI-aki-v1.3\execution.py", line 168, in _map_node_over_list
      process_inputs(input_dict, i)
    File "D:\ComfyUI-aki-v1.3\execution.py", line 157, in process_inputs
      results.append(getattr(obj, func)(**inputs))
    File "D:\ComfyUI-aki-v1.3\nodes.py", line 284, in decode
      return (vae.decode(samples["samples"]), )
    File "D:\ComfyUI-aki-v1.3\comfy\sd.py", line 322, in decode
      pixel_samples[x:x+batch_number] = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
    File "D:\ComfyUI-aki-v1.3\comfy\ldm\models\autoencoder.py", line 199, in decode
      dec = self.post_quant_conv(z)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
      return self._call_impl(*args, **kwargs)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
      return forward_call(*args, **kwargs)
    File "D:\ComfyUI-aki-v1.3\comfy\ops.py", line 93, in forward
      return super().forward(*args, **kwargs)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
      return self._conv_forward(input, self.weight, self.bias)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
      return F.conv2d(input, weight, bias, self.stride,
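
    The mismatch in the traceback above (a conv weight expecting 4 channels, latents with 16) is consistent with a 4-channel SD/SDXL VAE being used to decode Flux's 16-channel latents. A minimal check, assuming the diffusers AutoencoderKL API and access to the gated black-forest-labs/FLUX.1-dev repository:

    # Hypothetical check (not from the video): print how many latent channels the loaded VAE expects.
    # Flux's VAE (ae.safetensors) uses 16 latent channels, while SD/SDXL VAEs use 4 - decoding Flux
    # latents with an SD VAE produces exactly the "expected ... 4 channels, but got 16" error above.
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="vae")
    print(vae.config.latent_channels)  # 16 -> load this VAE in the VAE Loader node, not an SD one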

  • @dameguy_90
    @dameguy_90 28 days ago +3

    Thank you very much for the tutorial, but I am not getting the same quality as yours, just very, very poor-quality pictures. And my Flux model keeps producing only anime-style images, not realistic ones. Is there a solution?

    • @Huang-uj9rt
      @Huang-uj9rt 28 days ago +2

      Yes, I also tried it after watching this video and got a terrible image, not as good as the one I got running Flux on mimicpc. I think I'm going to be a big fan of mimicpc from now on; it has all the popular AI tools, and I can try them for free!

    • @CgTopTips
      @CgTopTips  28 days ago +2

      The difference is likely in your settings. Please go to the x-flux-comfyui folder and try XLabs' pre-built workflows for Canny, Depth, and HED with the default settings.

    • @plainpixels
      @plainpixels 24 days ago

      Still seems to suck unless you use the same type of images as their examples

    • @adriands8207
      @adriands8207 20 days ago

      @CgTopTips Which settings do you mean? We are following all the steps and settings in the video, but the results are awful.

  • @eltalismandelafe7531
    @eltalismandelafe7531 23 days ago

    In the Canny Edge node you have set the resolution to 768. My image is 1280x720; how can I set the Canny Edge resolution to 1280x720, or otherwise get a 1280x720 output image?

  • @antoinesaal2372
    @antoinesaal2372 18 days ago

    Help; error: No module named 'controlnet_aux' (for img2img).
    (My ComfyUI and custom_nodes are updated. I watched different ControlNet tutorials.)
    (I'm using Flux1-dev-Q4_K_S + ComfyUI + ControlNet.)
    There is an error at the "Canny Edge" node: No module named 'controlnet_aux'.
    I installed everything that I need (ComfyUI Manager, XLabs, ControlNet auxiliary models, ControlNet Canny); I don't understand why this doesn't work.
    If you have a solution :)
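
    A quick way to confirm whether controlnet_aux is visible to the interpreter that actually runs ComfyUI (on the portable build that is python_embeded\python.exe, not the system Python) is the minimal, hypothetical check below; it is not from the video.

    # Run this with the same Python that launches ComfyUI.
    import importlib.util
    import sys

    print("Running under:", sys.executable)
    spec = importlib.util.find_spec("controlnet_aux")
    print("controlnet_aux found:", spec.origin if spec else "NO - install it into this interpreter")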

  • @anagnorisis2024
    @anagnorisis2024 25 days ago

    Is there a workflow based on this where I can input an image and do a style or composition transfer?

  • @kevinwang7340
    @kevinwang7340 27 days ago

    The workflow runs fine, but the results are very far off from your samples; it seems like it does not interpret the inputs well and gives weird images.

  • @CasasYLaPistola
    @CasasYLaPistola 28 days ago

    Thanks for the video. One question: does it only work with the Dev model? Does it not work with the Schnell model?

    • @CgTopTips
      @CgTopTips  28 days ago +1

      Yes, both the Schnell and Dev models work fine

  • @VazgenAkopov1976
    @VazgenAkopov1976 28 days ago +1

    It's a pity, but it DOESN'T WORK with 32 GB of RAM and 8 GB of video card memory! :(

    • @CgTopTips
      @CgTopTips  27 days ago +2

      Yes, you need at least 12 GB 😕

  • @cameochan7405
    @cameochan7405 22 days ago

    AttributeError: 'DoubleStreamBlock' object has no attribute 'processor'
    What does this mean, please?

  • @CGFUN829
    @CGFUN829 28 days ago

    Thanks! What resolution do you recommend when doing animation using SD 1.5, Depth, and Canny?

  • @yaahooali
    @yaahooali 28 days ago

    Thank you

  • @李云-f1b
    @李云-f1b 21 days ago

    Very good,

  • @brunocandia9671
    @brunocandia9671 27 days ago

    ThX!

  • @wowforeal
    @wowforeal 28 days ago +1

    Does it work with NF4?

  • @kallamamran
    @kallamamran 28 days ago +1

    Thanks, but... !!! Exception during processing!!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

    • @tetsuooshima832
      @tetsuooshima832 27 days ago

      I had the exact same error, but today ComfyUI has been updated to support Flux controlnets, so hopefully we don't need this anymore

  • @shadowheg
    @shadowheg 27 days ago

    It doesn't work on Dev fp32 with 32 GB RAM and a 4080 with 12 GB VRAM, but it works on NF4. The result is unsuccessful though :(

  • @joneschunghk
    @joneschunghk 28 days ago

    You are installing "requirements.txt" into your system Python, not ComfyUI's python_embeded.

    • @CgTopTips
      @CgTopTips  28 days ago

      Follow the x-flux-comfyui installation instructions on the GitHub page.

    • @davoodice
      @davoodice 28 days ago

      Yes, your way is not for ComfyUI portable. @CgTopTips
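
      One way to make sure the node's requirements end up in ComfyUI portable's own interpreter, rather than a standalone Python, is to drive pip through sys.executable. A hedged sketch (not the video's instructions), with a hypothetical requirements path:

      # Install x-flux-comfyui's requirements with the interpreter that actually runs ComfyUI
      # (python_embeded\python.exe on the portable build), not whatever "pip" is on PATH.
      import subprocess
      import sys

      requirements = r"ComfyUI\custom_nodes\x-flux-comfyui\requirements.txt"  # adjust to your install
      subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", requirements])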

  • @RiiahTV
    @RiiahTV 28 days ago

    It's like that!

  • @davoodice
    @davoodice 28 days ago

    The installation of the x-flux package is not correct. You installed x-flux into the standalone Python, not into ComfyUI portable's Python.

  • @MilesBellas
    @MilesBellas 28 days ago +1

    it works !?!😊

    • @CgTopTips
      @CgTopTips  28 days ago +1

      I'm glad you were able to get a result. This workflow needs more VRAM to avoid the "CUDA Out of Memory" error.

  • @ismgroov4094
    @ismgroov4094 28 days ago

    ❤😅