3 FREE Ways to Style Videos with Comfy UI in 2024!

  • Published: 24 Dec 2024

Comments •

  • @mhfx
    @mhfx  A month ago +4

    Diffutoon (speed) vs FastBlend (Balance) vs AnimateDiff (Detail) -- which style has the most potential to you?

    • @ammarzammam2255
      @ammarzammam2255 A month ago

      Please share your workflow; your actions are so fast that I'm struggling to follow your instructions

  • @joshuamcdermott6002
    @joshuamcdermott6002 A month ago +2

    Love the detail, not just in the video, but in the description telling where to save files and such. Really grateful for this :)

    • @mhfx
      @mhfx  A month ago

      Ah thank you so much, I really appreciate that 🤘🤘

  • @sudabadri7051
    @sudabadri7051 A month ago +1

    Great work

    • @mhfx
      @mhfx  A month ago +1

      Really appreciate that, thank you 🙏✌️

  • @ian2593
    @ian2593 A month ago +1

    Nice process. How many frames max can you convert with this process? Limited to 32 or 48 frames? Thanks.

    • @mhfx
      @mhfx  A month ago +1

      It depends on your hardware. I think my box crashed on a 13-second clip, but I was able to do up to 7 seconds without any issues. In between 7 and 13, it depended on resolution and settings

  • @ian2593
    @ian2593 A month ago

    Last question. When you were using the smooth node (second-to-last example), on my system the first Video Combine was sharp but the smoothed one was very blurry. Is there a setting to sharpen it up a bit (or is that just what happens to low-res videos)?

    • @mhfx
      @mhfx  A month ago +1

      No problem. This is the main balance you need to find with the smooth video node. It could be that your input is too low-res, or that there's too much of a difference between your KSampler output and your input image, so it appears blurry. It could also just be that you need to tweak the settings in the smooth video node; something with high motion, for example, will need tweaks to the example settings. I think those are good places to start testing first, though

  • @ian2593
    @ian2593 A month ago

    Where do you get the specific IPAdapter files from, and where do you place them? Thanks.

    • @mhfx
      @mhfx  A month ago

      These are just the standard IPAdapter models, nothing special. You can install IPAdapter from the Manager. The GitHub page also has a whole tutorial from Latent Vision specifically on how to use it -- check it out here: github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file

  • @tanyasubaBg
    @tanyasubaBg A month ago

    Thank you for the video and all the information, but I had an issue at the start: it showed me (IMPORT FAILED) for a custom node for DiffSynth-Studio. What do I have to do about it? Thank you in advance

    • @mhfx
      @mhfx  A month ago

      Ah -- it should not be an issue, just continue as normal. I'm not 100% sure why it gives this notice, but it is updating one (possibly more) nodes from the other install. Perhaps there's no actual import, just a code update 🤷‍♂️ Either way it will not affect your workflow

    • @tanyasubaBg
      @tanyasubaBg A month ago

      @@mhfx Thank you. Can you tell me where I can find that ControlNet? Is the one named controlnet11Models_tileE.safetensors the right version?

    • @tanyasubaBg
      @tanyasubaBg A month ago

      @@mhfx Thank you. Could you help me find the right ControlNet model? I used this one from Hugging Face: ControlNet-v1-1, but I keep having the same issue. It tells me to load the model in ComfyUI_windows_portable\ComfyUI\models\controlnet\, even though the file is already there.

    • @mhfx
      @mhfx  A month ago

      So there are a few ControlNets. Is this for the Diffutoon workflow? You'll need the tile and lineart ControlNets. Make sure you grab the .pth files. If you're downloading from Hugging Face, it should be these: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
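      A minimal sketch of fetching the two .pth ControlNets mentioned above and placing them where ComfyUI looks for them. The exact filenames and the install path are my assumptions (the tile and lineart files as named in the lllyasviel/ControlNet-v1-1 repo); adjust COMFY to your own install.

      ```shell
      # Assumed ComfyUI root -- change this to your install location.
      COMFY="ComfyUI_windows_portable/ComfyUI"
      CN_DIR="$COMFY/models/controlnet"
      mkdir -p "$CN_DIR"

      # Filenames assumed from the lllyasviel/ControlNet-v1-1 repo (tile + lineart).
      BASE="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main"
      for f in control_v11f1e_sd15_tile.pth control_v11p_sd15_lineart.pth; do
        # -nc skips files you already have; print a note if the download fails
        wget -nc "$BASE/$f" -P "$CN_DIR" || echo "could not fetch $f; download it manually"
      done
      ```

      After this, restart ComfyUI (or refresh the node) so the models show up in the ControlNet loader's dropdown.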

  • @pedronataniel3151
    @pedronataniel3151 23 days ago

    When installing DiffSynth as you described, I encounter an error ("IMPORT FAILED"), and the nodes remain red. Even if I press the "Try Fix" button, it doesn't import. I’d really like to give it a try. Do you have any idea how to resolve this issue? Thanks so much!

    • @mhfx
      @mhfx  23 days ago

      Yeah, no problem, just ignore this and proceed as usual; it will still work ✌️✌️

    • @pedronataniel3151
      @pedronataniel3151 23 days ago

      ​@@mhfx But if the nodes still remain red, I cannot input anything. How am I supposed to proceed?

    • @mhfx
      @mhfx  23 days ago

      Oh, red nodes are something else. Does the "Install Missing Nodes" button work? If not, you may have an outdated ComfyUI; try "Update All" in the Manager. If both of those don't work, you can try installing manually using git clone.
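      A sketch of the manual git-clone fallback mentioned above: clone the custom node's repo straight into ComfyUI's custom_nodes folder, then restart ComfyUI. I'm using the IPAdapter repo linked earlier in the thread as the example URL; the ComfyUI path is an assumption, so point COMFY at your own install.

      ```shell
      # Assumed ComfyUI root -- change this to your install location.
      COMFY="ComfyUI_windows_portable/ComfyUI"
      mkdir -p "$COMFY/custom_nodes"

      # Clone the node repo (IPAdapter here as an example) into custom_nodes;
      # print a note if the clone fails (offline, repo moved, etc.)
      git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus.git \
        "$COMFY/custom_nodes/ComfyUI_IPAdapter_plus" \
        || echo "clone failed; download the repo ZIP from GitHub and unzip it there"
      ```

      Some node packs also ship a requirements.txt; if so, install it with the same Python that runs ComfyUI before restarting.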

  • @CHATHK
    @CHATHK A month ago +1

    What's your VRAM, and how long did it take?

    • @mhfx
      @mhfx  A month ago

      I have an RTX 4090. Diffutoon takes a few minutes, approximately 1-2; FastBlend takes about 4-5; and AnimateDiff takes around 10-20 minutes

    • @Darkwing8707
      @Darkwing8707 14 days ago

      @@mhfx And this is for 1 second of video right?

  • @yangli1437
    @yangli1437 A month ago +1

    Please make more videos.

    • @mhfx
      @mhfx  A month ago

      😁😁👍👍

  • @Djonsing
    @Djonsing 20 days ago

    *And what about the reverse? Can you turn videos generated in an editor (for example UE5) into photorealistic ones here?*

    • @mhfx
      @mhfx  20 days ago +1

      If you use a realistic LoRA you can. This only works for the last two options, though, since the Diffutoon workflow seems to be inconsistent with different LoRAs.