FLUX TOOLS - Run Local - Inpaint, Redux, Depth, Canny

  • Published: Nov 24, 2024

Comments • 72

  • @OlivioSarikas
    @OlivioSarikas  3 days ago +1

    #### Links from my Video ####
    Get my Shirt with Code "Olivio" here: www.qwertee.com/
    blackforestlabs.ai/flux-1-tools/?ref=blog.comfy.org
    huggingface.co/black-forest-labs/FLUX.1-Canny-dev-lora
    huggingface.co/black-forest-labs/FLUX.1-Depth-dev
    huggingface.co/black-forest-labs/FLUX.1-Redux-dev
    huggingface.co/black-forest-labs/FLUX.1-Fill-dev
    comfyanonymous.github.io/ComfyUI_examples/flux/

    • @LouisGedo
      @LouisGedo 3 days ago

      👋 hi

    • @Riker20
      @Riker20 2 days ago

      I hate the spaghetti program

    • @jonrich9675
      @jonrich9675 2 days ago

      Where do the folders go? And maybe do a Forge version?
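On the folder question above: a minimal sketch of where these files conventionally go in a standard ComfyUI install. The layout below is an assumption based on common practice, not an official manifest; check each model card and the ComfyUI examples page for specifics.

```python
# Conventional ComfyUI model layout for the Flux tools (assumed; verify per model card).
MODEL_DIRS = {
    "diffusion_model": "ComfyUI/models/diffusion_models",  # FLUX.1-Fill-dev, FLUX.1-Depth-dev, FLUX.1-Redux-dev
    "gguf_unet":       "ComfyUI/models/unet",              # quantized GGUF checkpoints (need a GGUF UNet loader node)
    "lora":            "ComfyUI/models/loras",             # FLUX.1-Canny-dev-lora
    "clip_vision":     "ComfyUI/models/clip_vision",       # vision encoder used by Redux
    "text_encoder":    "ComfyUI/models/clip",              # clip_l / t5xxl text encoders
    "vae":             "ComfyUI/models/vae",
}

def where(kind: str) -> str:
    """Look up the target folder for a given kind of model file."""
    return MODEL_DIRS[kind]
```

Newer ComfyUI builds also accept `models/unet` as a legacy alias for `models/diffusion_models`, which is why older guides mention both.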

  • @ericpanzer8159
    @ericpanzer8159 3 days ago +7

    I would recommend lowering your Flux guidance and trying DEIS/SGM_uniform or Heun/Beta to reduce the plastic skin appearance. The default guidance for Flux in sample workflows is *way* too high. For example, 3.5 is the default, but 1.6-2.7 yields superior results.

    • @jorolesvaldo7216
      @jorolesvaldo7216 1 day ago

      Yeah, but just clarifying that it is usually better for REALISTIC prompts. With vector, anime, and flatter styles, keep guidance higher (like 3.5) to avoid unwanted noise. Just in case someone reading this gets confused.

    • @ericpanzer8159
      @ericpanzer8159 1 day ago

      @@jorolesvaldo7216 Your point is well taken! And the opposite is true for painting styles. Flux is weird in these ways :P
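The rule of thumb from this thread can be written down as a tiny helper. The style-to-guidance mapping is just the commenters' heuristic, and 2.0 as the "low" value is one pick from their suggested 1.6-2.7 range:

```python
def suggested_flux_guidance(style: str) -> float:
    """Heuristic FluxGuidance value per the thread above:
    realistic prompts benefit from low guidance (less plastic skin),
    while vector/anime/flat styles keep the 3.5 default to avoid noise."""
    low, default = 2.0, 3.5
    return low if style == "realistic" else default
```

In ComfyUI this value would feed the FluxGuidance node; treat it as a starting point for experimentation, not a fixed rule.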

  • @asdfwerqewsd
    @asdfwerqewsd 2 days ago +2

    Are there going to be GGUF versions of these models?

  • @David.Charles.
    @David.Charles. 3 days ago +3

    I just noticed Olivio has a mouse callus. It is a true badge of honor.

  • @user-hi3ke6qh7q
    @user-hi3ke6qh7q 3 days ago

    Those tabs and that mask shape was wild. Thanks for the info :)

  • @KDawg5000
    @KDawg5000 3 days ago

    Finally playing with this a bit. I wish the depth map nodes would keep the same resolution as the input image. I'm sure I could just use some math nodes to do that, but it seems like it should be automatic, or a checkbox on the node. This matters in these setups because the input controlnet image (depth/canny) drives the size of the latent image, and thus the size of your final image.
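The resolution-matching step this comment asks for can be approximated with a small snap-to-multiple helper. This is a sketch: the multiple of 8 matches the usual latent grid, but whether rounding, cropping, or padding is preferable depends on your workflow.

```python
def latent_safe_size(width: int, height: int, multiple: int = 8) -> tuple[int, int]:
    """Snap an input image's resolution to the nearest multiple the latent grid needs,
    so the depth/canny conditioning image (and hence the final image) keeps the input size."""
    def snap(v: int) -> int:
        return max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)
```

For example, `latent_safe_size(1023, 768)` gives `(1024, 768)`; a pair of math nodes in the graph could do the same.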

  • @zebmac
    @zebmac 2 days ago

    Great video! Redux, Depth and Canny (I have not tried Fill yet) work with the Pixelwave model too.

  • @gimperita3035
    @gimperita3035 2 days ago

    I'm using SD 3.5 L for the Ultimate Upscaler - with a detailer Lora - and it works fantastic!

  • @alpaykasal2902
    @alpaykasal2902 3 days ago

    GREAT t shirt.... and episode, as always.

  • @stefanoangeliph
    @stefanoangeliph 2 days ago +1

    I have been testing the Depth LoRA, but the output is very far from the input image. It does not seem to work the way ControlNet depth does. Even in your video, the two cars have similar positions, but they are not sharing the same "depth": the input red car is seen from a higher position than the output one. In my test (a bedroom) the output image is sometimes "reversed". Is this expected? Does it mean that these two, Canny and Depth, are far from how ControlNet works?

  • @Skettalee
    @Skettalee 3 days ago +1

    Great video and I want you to know that I really like your shirt!

    • @OlivioSarikas
      @OlivioSarikas  3 days ago +1

      Thank you :) I put a link to it in my info :)

    • @Skettalee
      @Skettalee 3 days ago

      @@OlivioSarikas Seen that, gonna get me one too!

  • @middleman-theory
    @middleman-theory 3 days ago

    My dawg, that shirt. Love it.

  • @AlexeySeverin
    @AlexeySeverin 3 days ago

    Thanks for sharing! That's great news! Let's see if native control nets work better... As it usually happens with FLUX, some things just don't seem to make a lot of sense... Like what on Earth is with Flux Guidance 10? Or 30??! Also, why do we need a whole 23GB separate model just for the inpainting (which we can already do with the help of masking and differential diffusion anyways). Why? So many questions, Black Forest Labs, so many questions...

    • @AlexeySeverin
      @AlexeySeverin 3 days ago

      I edited my reply because I realized there's also a Lora for depth, so, my bad. But the rest is still valid, why does Flux have to be so wild?? :)))

  • @geyck
    @geyck 3 days ago +2

    Can you do OpenPose yet for Flux-Forge?

  • @Zegeeye
    @Zegeeye 2 days ago

    To make the REDUX model work, you have to add a node to control the amount of strength.

  • @yapyh2872
    @yapyh2872 1 day ago

    Do you think there will be a GGUF version of the regular Flux model with the inpaint feature in the future, for low-VRAM users?

  • @therookiesplaybook
    @therookiesplaybook 1 day ago

    Is there a way to adjust the depth map so Comfy doesn't take it so literally? And how do you set up a batch of images so you don't have to do one at a time?

  • @ian2593
    @ian2593 2 days ago +1

    Mine threw up an error when running through Canny Edge, but not with Depth Anything. If I disconnect it, run the process once, and then reconnect and run again, it works. It says I'm trying to run conflicting models the first time, but everything exactly matches what you're running. Just letting others who might have the same issue know what to do.

    • @SenshiV
      @SenshiV 2 days ago

      Got this too, and your tip helped, thanks.

  • @FrankWildOfficial
    @FrankWildOfficial 3 days ago +1

    Can we use the inpainting model together with a LoRA trained on the regular dev model?
    That would be a game changer, because then two consistent, unique characters in one image would be possible 🥳

    • @Elwaves2925
      @Elwaves2925 3 days ago

      I don't know but it's definitely worth a try. Just a pity it requires the full model.

    • @OlivioSarikas
      @OlivioSarikas  3 days ago +1

      I haven't tried it, but I don't see why it shouldn't work.

    • @Darkwing8707
      @Darkwing8707 3 days ago

      @@Elwaves2925 It doesn't. You can convert it to fp8 yourself or grab it off of civitai.

  • @FlyingCowFX
    @FlyingCowFX 2 days ago

    I am seeing very grainy results with the Flux Fill model for inpainting. I wonder if it's my settings or the model.

  • @tats5850
    @tats5850 2 days ago

    Thank you for the video. The inpaint looks promising. Do you think the 24 GB inpainting model will work with a 4060 Ti (16 GB of VRAM)?

  • @therookiesplaybook
    @therookiesplaybook 2 days ago +2

    What am I missing? The output image doesn't match the input at all when I do it.

    • @stefanoangeliph
      @stefanoangeliph 2 days ago

      Same here... Depth and Canny seem not to work like a controlnet. I am confused.

    • @therookiesplaybook
      @therookiesplaybook 1 day ago

      @@stefanoangeliph I updated Comfy and it's working now.

  • @LydianMelody
    @LydianMelody 3 days ago

    I need that shirt :O (edit: oh hello link! Thanks!!!)

  • @KK47..
    @KK47.. 3 days ago

    Thank you Again, OV

  • @CHATHK
    @CHATHK 3 days ago

    On time!!

  • @tukanhamen
    @tukanhamen 1 day ago

    I'm getting the "shapes cannot be multiplied" error for some reason, and I don't know why; I have everything set up properly.

  • @BillyNoMate
    @BillyNoMate 18 hours ago

    Can someone use flux redux and churn out some images of Jaguar cars with their new ad theme merged in? Very curious.

  • @researchandbuild1751
    @researchandbuild1751 2 days ago

    How did you know you need a visual CLIP model?

  • @Osama-xs8cl
    @Osama-xs8cl 2 days ago

    Hello Olivio, what is the minimum GPU VRAM that can run Flux in ComfyUI?

  • @AdvancExplorer
    @AdvancExplorer 3 days ago

    Does it work with GGUF Flux models?

  • @FusionDeveloper
    @FusionDeveloper 3 days ago

    Great video.

  • @mateuszpaciorek7219
    @mateuszpaciorek7219 3 days ago

    Where can I find all the workflows you're using in this video?

  • @Gli7chSec
    @Gli7chSec 3 days ago +10

    I just want video generation in forge FLUX

  • @mikrobixmikrobix
    @mikrobixmikrobix 2 days ago

    I have problems installing many nodes (Depth Anything). Can you let me know what version of Python you use? I have 3.12 included with Comfy and I often have this exact problem.

    • @OlivioSarikas
      @OlivioSarikas  2 days ago +1

      Comfy is self-contained, meaning it comes with the correct Python it needs. However, if you have run it for a long time, I would rename the Comfy folder and download it fresh. You need to reinstall all custom node packs and move the models over, but it is worth it.

    • @mikrobixmikrobix
      @mikrobixmikrobix 2 days ago

      @@OlivioSarikas Hmm... it's a new installation, and it gives me an "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'" error. GPT says it's because I should use Python 3.10.

    • @OlivioSarikas
      @OlivioSarikas  2 days ago +1

      @@mikrobixmikrobix Best to ask in my Discord. I'm not good at tech support and often ask there myself.
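For anyone hitting the same `ImpImporter` error: it is usually not a reason to switch to Python 3.10. `pkgutil.ImpImporter` was removed in Python 3.12, and the error typically comes from an outdated setuptools/pip building a node's dependencies; upgrading the build tooling inside Comfy's embedded Python (e.g. `python -m pip install --upgrade pip setuptools wheel`) is the usual fix. The version logic, as a small check:

```python
def impimporter_removed(major: int, minor: int) -> bool:
    """pkgutil.ImpImporter was deprecated since Python 3.3 and removed in 3.12,
    which is what breaks old setuptools builds on 3.12 installs."""
    return (major, minor) >= (3, 12)
```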

  • @jaywv1981
    @jaywv1981 3 days ago

    Using that same workflow for inpainting, I'm getting an error that it's missing the noise input.

  • @jiexu-j9w
    @jiexu-j9w 3 days ago

    Does Redux work with the GGUF Q4 version? I only have 8 GB of VRAM.

  • @bause6182
    @bause6182 3 days ago

    Can you run this with 12 GB of VRAM with GGUF Q4 Flux?

  • @Kvision25th
    @Kvision25th 2 days ago

    Flux is so all over the place :/ guidance 30 :D

  • @CHATHK
    @CHATHK 3 days ago

    6:11 I'm not sure what's wrong, but the Redux output image comes out blurry.

  • @486DX
    @486DX 3 days ago

    2:10 What is "fp8_e4m3fn_fast" and where can I download it?

    • @OlivioSarikas
      @OlivioSarikas  3 days ago

      Did you update your ComfyUI? For me it was just there.

  • @forgottenwisdoms
    @forgottenwisdoms 3 days ago

    What's the easiest way to run Flux on Mac in Comfy?

  • @Showbiz_Stuff
    @Showbiz_Stuff 3 days ago

    What about Forge integration?

  • @blutacckk
    @blutacckk 3 days ago

    Would my 3070 8 GB be able to run Flux?

    • @OlivioSarikas
      @OlivioSarikas  3 days ago +1

      I was told yes. You might need a GGUF model though, which has to go into the unet folder and needs the UNet loader. But better to ask in my Discord.

    • @CHATHK
      @CHATHK 3 days ago

      @@OlivioSarikas What about a 3080 Ti with 12 GB?
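A back-of-the-envelope way to answer these VRAM questions (Flux.1-dev is roughly 12B parameters per Black Forest Labs; the figure below ignores the text encoders, VAE, and activations, so treat it as a lower bound):

```python
def checkpoint_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough checkpoint footprint in GB: parameter count times bits per weight."""
    return n_params * bits_per_weight / 8 / 1e9

# ~12B parameters: fp16 ~ 24 GB, fp8 ~ 12 GB, 4-bit GGUF ~ 6 GB,
# which is why 8-12 GB cards generally need the fp8 or Q4 variants.
```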

  • @MillennialKiwiGamer
    @MillennialKiwiGamer 1 day ago

    comfyui AGAIN

  • @thedevilgames8217
    @thedevilgames8217 3 days ago

    Why is everything Comfy?

    • @OlivioSarikas
      @OlivioSarikas  3 days ago +1

      Because it gets everything first and is the best UI to try new things.

  • @bobobaba2080
    @bobobaba2080 2 days ago

    I get this error while loading CLIP vision: "CLIPVisionLoader
    Error(s) in loading state_dict for CLIPVisionModelProjection:" even though I downloaded this file (siglip-so400m-patch14-384.safetensors, 3.4 GB) and this file (sigclip_vision_patch14_384.safetensors, 836 MB) and placed them in my ComfyUI\models\clip_vision directory. Does anyone know what I should do?

  • @tony178yt
    @tony178yt 1 hour ago

    Does it work with GGUF Flux models?