Flux Workflows: Updated Models, ControlNet & LoRa in ComfyUI

  • Published: 10 Sep 2024

Comments • 108

  • @SebAnt
    @SebAnt 1 month ago +6

    Thanks for testing the cutting edge models and keeping us up to date. 🙏🏽

  • @pixelzen007
    @pixelzen007 1 month ago +1

    Excellent work, mate! Clean, concise, and right to the point. I rarely comment on YouTube videos, but you deserve a comment. Well done and keep it up!

  • @Gabriecielo
    @Gabriecielo 1 month ago

    Well explained and caught up all latest info as usual, thanks a lot!

  • @tveerco6800
    @tveerco6800 1 month ago

    Just what I needed, thanks so much!

  • @Radarhacke
    @Radarhacke 1 month ago +2

    Thank You very Much!

  • @PyruxNetworks
    @PyruxNetworks 1 month ago +1

    You don't need to download again if you already have the fp16 models. You can use the Flux model merger node to create an fp8 checkpoint.

  • @Archalternative
    @Archalternative 1 month ago

    Excellent video:) Bravo.

  • @YTbannedme-g8x
    @YTbannedme-g8x 1 month ago +2

    I'm using a guidance scale of 2.5 for realism. I find that the default 3.5 makes the skin look like it has a plastic HDR sheen. The higher the scale, the less realism. But like you said, a lower scale sacrifices prompt adherence.

    • @bgtubber
      @bgtubber 1 month ago

      Yes, there's a sacrifice there. I'm thinking of a way to compensate for this. My idea is: what if you leave the guidance scale at 3.5 and use the DynamicThresholdingFull node with mimic_scale set to 2? Have you tried this?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +1

      Thanks for sharing, both of you! I will give these numbers a try.

  • @DanielSchweinert
    @DanielSchweinert 1 month ago +1

    Change the values in ModelSamplingFlux to Max Shift 0.5 and Base Shift 0.3; this will also give you more photorealism. And of course you can also lower FluxGuidance to 1.8-2.3.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      Thanks for sharing, I will give it a try!

    • @chiptaylor1124
      @chiptaylor1124 29 days ago

      I tried these settings, and it just made the image blurry. Any help greatly appreciated. Thank you!

    • @CodeCraftersCorner
      @CodeCraftersCorner 29 days ago +1

      @chiptaylor1124 If it helps, I am using the default values with more emphasis on the prompt.

    • @chiptaylor1124
      @chiptaylor1124 20 days ago

      @@CodeCraftersCorner Thank you!

  • @user-lb6cy9sx3j
    @user-lb6cy9sx3j 1 month ago +1

    Nice Job! 👍

  • @2PeteShakur
    @2PeteShakur 1 month ago +5

    nice, but bro pls use night mode, those white site scenes are blinding me! ;)

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +2

      Thanks for letting me know! My system automatically switches to light mode during the day and dark mode at night, so I didn’t realize this. I'll make sure to adjust it in future videos.

  • @marshallodom1388
    @marshallodom1388 4 days ago

    Awesome video, thanks!
    I got an error about multiplying a floating-point number (time_factor) by a NoneType (zero for time), which does not compute. Python 3.10 allows you to write the annotation as int | None, but it should be coded as Union[A, B] with a return path that yields actual numbers.
    Simply adding a ConditioningSetTimestepRange of course didn't work.
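The float-times-NoneType failure described above can be reproduced and guarded against in plain Python. This is an illustrative sketch only: the function name scale_time, the parameter names, and the zero fallback are assumptions for demonstration, not ComfyUI's actual code. Note that Optional[float] is just Union[float, None], the pre-3.10 spelling of float | None that the comment alludes to.

```python
from typing import Optional


def scale_time(time_factor: float, time: Optional[float]) -> float:
    """Multiply time_factor by time, guarding against the None case.

    Without the guard, time=None raises:
    TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
    """
    if time is None:
        time = 0.0  # assumed fallback; the right default depends on the sampler
    return time_factor * time


print(scale_time(1.5, 2.0))   # 3.0
print(scale_time(1.5, None))  # 0.0 instead of a TypeError
```

The fix that matters at the call site is the None check before the multiplication; the annotation style (Union vs. the 3.10+ pipe syntax) only affects type checkers, not the runtime error.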

    • @CodeCraftersCorner
      @CodeCraftersCorner 2 days ago

      Thank you for the comment! I haven’t encountered this error myself, but if I find a solution, I’ll be sure to post it here.

  • @mehmetalirende
    @mehmetalirende 17 days ago

    Can you please explain: if we have 2 LoRAs and they are different persons, how can we use them without affecting each other? Think of a couple's picture, for example.

    • @CodeCraftersCorner
      @CodeCraftersCorner 13 days ago

      Hello, maybe try a LoRA stack node. There are plenty in the Manager. You will have to test out which settings work. It's still early days with Flux and LoRAs.

  • @sunlightlove1
    @sunlightlove1 1 month ago

    thanks again man

  • @rubenrodenascebrian3855
    @rubenrodenascebrian3855 1 month ago +1

    Great video! Could you upload your workflows? Thank you very much.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      Thank you! In the description of the video, there is a section "Resources". Click on the updated workflows link. You can then drag either the dev or schnell image into ComfyUI to load the workflow.

  • @zdwork
    @zdwork 27 days ago

    Could you please write a tutorial on using PULID with FLUX🙏🙏🙏

    • @CodeCraftersCorner
      @CodeCraftersCorner 26 days ago

      Hello, currently PuLID is only for SDXL models. Once we get support for FLUX, I will do one.

  • @NeonXXP
    @NeonXXP 1 month ago

    Is this only working on FP8? I'm using FP16 with a 3090 and it's throwing errors that I don't understand.

  • @Atenea-u4x
    @Atenea-u4x 28 days ago

    Hello,
    can you tell me why my Flux schnell is a .sft file? I put it in models/unet and I don't understand why my schnell is .sft and yours is not. Also, will it be different from the dev one?

    • @CodeCraftersCorner
      @CodeCraftersCorner 28 days ago

      Hello, the .sft file only has the UNET component. To get the safetensors one, go into the description of the video and click on the "Updated workflows" link. This will take you to a post. On the first line, where it says "checkpoint for the Flux dev here", click on the "here" link and download the safetensors file. Place it in the models > checkpoints folder.

    • @Atenea-u4x
      @Atenea-u4x 27 days ago

      @@CodeCraftersCorner Thank you for the detailed answer. Please tell me, what does it mean that I only have the UNET component? I got the files from a Flux workflow in the first days of release and I don't know very well what I'm doing. I'd really appreciate understanding a bit more.
      Thank you!

    • @CodeCraftersCorner
      @CodeCraftersCorner 27 days ago

      @@Atenea-u4x There are two ways of using the Flux models. The .sft files you have require a specific workflow. I made a video explaining them; I think it would be easier to follow the video than to explain it in text. You can watch it here: ruclips.net/video/OaS9hFD_xz8/видео.html

  • @renzocheesman6844
    @renzocheesman6844 25 days ago

    Can I simply use a custom-made LoRA inside ComfyUI with Flux? I'm a noob.

  • @trizais
    @trizais 1 month ago

    What is that IDE you use with the icons on the left? Can you tell me please where I can download it?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +2

      It is the Edge browser. I changed the setting to show tabs on the left side instead of at the top.

  • @muhammadidrees9481
    @muhammadidrees9481 1 month ago

    The workflow is not working; it shows me the error "failed to fetch". How do I solve this problem?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      Hello, you can try to update your ComfyUI. If you have the portable version, go into the updates folder and open the update_comfyui_and_dependencies.bat file. Once completed, start ComfyUI and try again. Updating through the Manager alone seems to not work well.

  • @carterknudsen525
    @carterknudsen525 1 month ago

    Great video -- thanks!
    I'm having an issue running this workflow; I get this error: "ERROR: Could not detect model type of: C:\comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux1-dev-fp8.safetensors"
    Any idea how to fix this?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +1

      Thank you. For the model error, you will need to download it and place it in the folder ComfyUI > models > checkpoints.

  • @pixelzen007
    @pixelzen007 1 month ago

    The new models that are half the size and supposed to be faster are taking much longer than the original 22 GB version (dual CLIP workflow). The original version takes 2 minutes per image on my PC, but the new, supposedly faster version takes about 15-20 minutes. Something is really off here, but I'm not sure what. My ComfyUI is up to date. I used the same workflow you showed in the video. Why do you think this is?

  • @BibhatsuKuiri
    @BibhatsuKuiri 1 month ago

    I don't know why, but my generated image looks grainy. I am following the exact workflow given on the website. Anyone facing the same issue?

  • @voxyloids8723
    @voxyloids8723 1 month ago

    I have the same mouse cursor color :)

  • @Maeve472
    @Maeve472 1 month ago

    What extension are you using to preview nodes?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +1

      It's a ComfyUI update. I made a video on it and explain how to set it up here: ruclips.net/video/8an9mkpDS2o/видео.html

  • @Decay3333
    @Decay3333 29 days ago

    Can you use it with schnell?

  • @nomorejustice
    @nomorejustice 1 month ago

    Keep up the good work, man! Does the Flux dev version work for low VRAM at 512 size?

    • @CodeCraftersCorner
      @CodeCraftersCorner 29 days ago +1

      Thank you! Yes, I am using it with 4GB of VRAM generating images at 1024 size.

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 month ago

    There's a negative prompt workflow on Civitai now. 🙂

  • @catskingdom7619
    @catskingdom7619 1 month ago

    Can you please make a video on how to make a professional profile pic using Flux? You could replace the face in a professional AI image.

  • @Maeve472
    @Maeve472 1 month ago

    lora key not loaded?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +1

      Hello, can you try this one: flux_RealismLora_converted_comfyui. There is a link in the description of the video.

  • @chaitanyapatel-cj5cr
    @chaitanyapatel-cj5cr 1 month ago +1

    What are your PC specs?

  • @GamingLegend-uq3on
    @GamingLegend-uq3on 1 month ago

    Hello sir,
    Your videos are great and very informative. I have a question, and after reading in your bio that you are a Python guy, I'm pretty sure you can help me out. I want to migrate my Python installation from my C drive to another drive. Can you please help me do that without breaking everything, like my locally installed ComfyUI?
    Waiting for your comment.

    • @CodeCraftersCorner
      @CodeCraftersCorner 29 days ago +1

      Thank you! If you only want to move ComfyUI, you can move the entire ComfyUI folder if you have the portable version. In case you installed ComfyUI manually, I think you will have to reinstall it on the other drive.

    • @GamingLegend-uq3on
      @GamingLegend-uq3on 29 days ago

      @@CodeCraftersCorner I'll give you an idea of my situation and why I want this: I want to move Python to another drive, not ComfyUI. When I run ComfyUI on my laptop, the problem is that Python is on my C drive, and when I load models and my C drive starts doing reads and writes, it gets hot. My Wi-Fi card is right below it, and the heat causes my Wi-Fi to randomly shut itself down. So I'm wondering: if I move Python to another drive and that drive gets used instead, I shouldn't have this problem, as far as I can tell. Let me know if it is possible.

    • @CodeCraftersCorner
      @CodeCraftersCorner 29 days ago +1

      @GamingLegend-uq3on In this case, the problem might still persist. Python itself doesn't generate much heat, but image generation and heavy processing can cause your CPU/GPU to heat up. Moving Python to another drive won't change the fact that ComfyUI will still use your CPU/GPU for processing, which might lead to the same heating issue. But if you want to give it a try, you will have to reinstall Python on the new drive and change the Python path in the environment variables.

  • @satishpillaigamedev
    @satishpillaigamedev 1 month ago

    Hey, nice! What about the ControlNet?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +1

      Hello. For now, there seem to be more issues with the ControlNet than good results. You can monitor the progress by going into the description; under the resources section, there is a "controlnet issues" link.

  • @tveerco6800
    @tveerco6800 1 month ago

    I get a little more realistic image at denoise 0.9 instead of 1

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      Interesting! Thanks for sharing. I will give this a try.

  • @LouisGedo
    @LouisGedo 1 month ago

    Hi

  • @KINGLIFERISM
    @KINGLIFERISM 1 month ago

    Anyone else got an error message when trying to use the controlnet?

    • @aiamonlylove
      @aiamonlylove 1 month ago

      AttributeError: 'NoneType' object has no attribute 'keys'

    • @CodeCraftersCorner
      @CodeCraftersCorner 29 days ago

      Yes, it seems it is broken. They made a code branch separate from the main branch; it seems more experimental, for developers, for now. You can see the progress and how to get the ControlNet branch here: bit.ly/4cqA1dt

  • @user-li8xl5ew6k
    @user-li8xl5ew6k 1 month ago

    I think Flux is not available for commercial activity, right? And this is from a private entity, so we are training their model, debugging, and getting habituated without a free and stable future of development, and without even having commercial rights, right? So is our time not valuable, or are some YouTubers just making money by spreading this?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago +1

      The Flux schnell model is available for commercial activity. The dev model is for non-commercial research purposes only. Fun fact: the output (generated images) from the dev model can be used independently for commercial purposes.

  • @brianmolele7264
    @brianmolele7264 1 month ago

    still requires insane VRAM ?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      Yes, for fast generation. I am using a GTX 1650 with 4GB VRAM and 32GB of RAM. It takes about 10 minutes with the schnell model. The dev model takes way too long, but both work.

  • @mayorc
    @mayorc 1 month ago

    No way of finetuning yet?

  • @nannan3347
    @nannan3347 21 days ago

    So many bad Flux tutorials on YouTube. Why not just include a link to the ComfyUI workflow in the description?

    • @CodeCraftersCorner
      @CodeCraftersCorner 21 days ago

      Thanks for watching! You may have missed it but there is a link in the video description under the [RESOURCES] section that says "Updated Workflows".

  • @Piotr_Sikora
    @Piotr_Sikora 1 month ago

    Dev is better, but it can't be used commercially.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      While the model itself is strictly for non-commercial research only, the images you generate with it are free to use for commercial purposes.

  • @user-sy9eq2vp9h
    @user-sy9eq2vp9h 1 month ago

    Probably the main criticism of Flux as of now is that it can't do realistic images very well. I tend to disagree, but I've got mixed results. Anyway, judging by your screenshots, it looks like that LoRA is either undertrained or just garbage.

    • @2PeteShakur
      @2PeteShakur 1 month ago +1

      Hehe, give it a chance, it's very impressive so far, miles better than the crud of SD3, good blooming riddance! XD

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 month ago

      Yes, although it's not an all-in-one perfect model, Flux is better than any previous model so far.

    • @reasonsreasonably
      @reasonsreasonably 1 month ago

      ​@CodeCraftersCorner "Better" at text, and better at adhering to certain prompts, but it simply can't do certain things that would seem basic. I don't want to go into it right now; Flux is a good start, but I'll take SDXL with all the tools we have until Flux has more LoRAs and the like.

    • @AIImagePlaymates-r7h
      @AIImagePlaymates-r7h 1 month ago +1

      I disagree with those people as well; the images have come out looking very realistic. This is the most realistic I've seen images look since I started generating AI images. I can't see myself going back to anything outside of Flux.

    • @2PeteShakur
      @2PeteShakur 1 month ago

      @@AIImagePlaymates-r7h Definitely, the images are awesome, and when it comes to realism I don't need to adjust any particular settings really; I just add more details to the prompt. It's great! :)

  • @tazztone
    @tazztone 1 month ago

    I read that DoRAs are better than LoRAs; maybe a topic for a next video. Cheers!