ComfyUI Adds Native HunYuan Video Generation Support: Is It Worth Trying?

  • Published: 23 Dec 2024

Comments • 40

  • @Ollegruss_Music
    @Ollegruss_Music 3 days ago

    Thanks for the video and for the links to resources.

  • @jorgemiranda2613
    @jorgemiranda2613 3 days ago

    Thanks for sharing this !

  • @henkhbit5748
    @henkhbit5748 5 hours ago

    Thanks, I tried it and it was OK, but it does not follow the prompt completely. I think the 5-second limitation makes it harder to follow the prompt correctly.

  • @Redtash1
    @Redtash1 3 days ago

    Thanks for your videos. There is a video-to-video workflow in the Hunyuan custom nodes' example folder.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago

      Yes, it requires the ComfyUI-HunyuanVideoWrapper by Kijai.

  • @xcom9648
    @xcom9648 2 days ago

    There is a video-to-video workflow in the unofficial Comfy version that was released a while back.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago

      Yes, this is the repo: ComfyUI-HunyuanVideoWrapper by Kijai

  • @CryptoIndia9
    @CryptoIndia9 1 day ago +1

    I am trying it on my 4090 with 24 GB VRAM for a 5-second video, and it's taking 30-40 minutes. Is it safe for my card, given that the requirements say 40 GB or 80 GB VRAM? Also, I sometimes get an out-of-memory error on VAE Decode (Tiled).

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago +2

      Hello, that does not seem right. On 12 GB of VRAM, it takes me 14 minutes to generate. Maybe your resolution is too high or you are missing some dependencies. For the VAE Decode one, I have mine set to tile size = 128 and overlap = 32.
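
For readers unfamiliar with tiled VAE decoding: the tile size / overlap numbers control how each frame is split into overlapping patches that are decoded one at a time, capping peak VRAM. A minimal sketch (a generic illustration, not ComfyUI's actual implementation) of how tile start offsets could be computed for tile size 128 and overlap 32:

```python
def tile_starts(length: int, tile: int = 128, overlap: int = 32) -> list[int]:
    """Start offsets of overlapping tiles covering `length` pixels.

    Each tile advances by (tile - overlap); a final tile is clamped so the
    tiling ends exactly at `length`. Illustrative only, not ComfyUI's code.
    """
    stride = tile - overlap
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] + tile < length:  # clamp one last tile to the edge
        starts.append(length - tile)
    return starts

# A 480-pixel-wide frame decoded in 128-px tiles with 32-px overlap:
print(tile_starts(480))  # [0, 96, 192, 288, 352]
```

Smaller tiles mean lower peak memory per decode but more tiles (and more overlap work), which is why shrinking from 160/64 to 128/32 trades speed for VRAM headroom.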

    • @CryptoIndia9
      @CryptoIndia9 5 hours ago

      @CodeCraftersCorner I was using VAE settings of 160/64 with 154 frames to generate, but after changing the VAE to 128/32 it's still taking the same 30-40 minutes on my 4090. I'm using weight_dtype fp8_e4m3fn_fast; all other settings are the same as in the provided workflow.

    • @CryptoIndia9
      @CryptoIndia9 3 hours ago

      No, in fact it took 1 hr 20 min. I had locked my PC and it was working in the background, which may be the reason, but it was nowhere near your time. Now I'm trying the same prompt with LTXV v0.9.1 to compare generation times.

    • @CryptoIndia9
      @CryptoIndia9 2 hours ago

      Using LTXV v0.9.1, it took just 24 seconds on my 4090 to generate 153 frames of video. Amazingly fast!

  • @armauploads1034
    @armauploads1034 3 days ago

    Is image-to-video also possible, and can you please show a workflow for it? 🙂

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago

      Not with this model! If you can run the HunyuanVideo Wrapper, there is a workflow for it.

    • @Darkwing8707
      @Darkwing8707 1 day ago

      @CodeCraftersCorner That method just uses LLaVA to create a description of an image. It's not really I2V.

  • @Elektrashock1
    @Elektrashock1 3 days ago

    The Hunyuan Latent Video node is missing. I updated Comfy, but it's not available. Did you also install the Hunyuan video wrapper?

    • @CodeCraftersCorner
      @CodeCraftersCorner 3 days ago

      Hello, no need for custom nodes for this one; they are all native (built-in) nodes. Are you sure your ComfyUI updated correctly? Try updating manually if you used the Manager.

    • @johnedwards7655
      @johnedwards7655 2 days ago

      Had the same problem; manually updating with the Comfy update folder helped.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago

      @@johnedwards7655 Glad the manual method worked.

  • @nadora0
    @nadora0 3 days ago

    There is an fp8 version of HunyuanVideo; can I use it with this workflow?

    • @CGFUN829
      @CGFUN829 3 days ago +1

      There is a GGUF of the model, along with the Llama model used with it.

    • @nadora0
      @nadora0 3 days ago

      @CGFUN829 Can you give me a link for that, please?

    • @CodeCraftersCorner
      @CodeCraftersCorner 3 days ago

      Hello, this is the native ComfyUI implementation. You can get the GGUF version from the GitHub page.

  • @Elektrashock1
    @Elektrashock1 3 days ago

    I updated Comfy, but there is no Latent Video node?

    • @CodeCraftersCorner
      @CodeCraftersCorner 3 days ago

      Okay, try this: in the ComfyUI folder, open a CMD/terminal, type git log, and check whether you have commit 52c1d93. It was pushed yesterday (December 20th). It's possible your ComfyUI is not updating correctly.

  • @giuseppedaizzole7025
    @giuseppedaizzole7025 3 days ago

    Given low VRAM, why haven't you made a video using the GGUF models?

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago +1

      I was testing to see if it could run on my system, and I shared my results.

    • @giuseppedaizzole7025
      @giuseppedaizzole7025 1 day ago

      @CodeCraftersCorner Next one GGUF... :) Thanks.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago +1

      @giuseppedaizzole7025 Okay, I will check whether I can run it and how the quality is. If it's good, I will make a video.

    • @giuseppedaizzole7025
      @giuseppedaizzole7025 1 day ago

      @CodeCraftersCorner Great, I really appreciate that you answer. Thanks.

  • @nadora0
    @nadora0 3 days ago

    LTX 0.9.1 also gives me an error here because of the VAE; I tested a new VAE and have the same issue.

    • @PyruxNetworks
      @PyruxNetworks 3 days ago

      Update ComfyUI.

    • @CodeCraftersCorner
      @CodeCraftersCorner 3 days ago

      Hello, please update your ComfyUI to the latest version. You can also download the latest copy from their GitHub if you do not want to update your current version.

  • @silentage6310
    @silentage6310 2 days ago

    Unfortunately, this does not use all the GPUs in the PC.
    With 24 GB it allows rendering 480x272 px (for a 4x upscale to FHD) and 241 frames (10 sec).
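
As a quick sanity check on those figures (assuming the 4x spatial upscale stated above and an approximate 24 fps playback rate, which the comment implies but does not state):

```python
# Verify the arithmetic behind the comment: 480x272 upscaled 4x lands
# just above Full HD, and 241 frames at ~24 fps is roughly 10 seconds.
width, height = 480, 272
up_w, up_h = width * 4, height * 4
print(up_w, up_h)          # 1920 1088 -- slightly taller than 1920x1080
print(round(241 / 24, 1))  # 10.0 -- about ten seconds at the assumed 24 fps
```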

  • @AgustinCaniglia1992
    @AgustinCaniglia1992 3 days ago +1

    This model does NOT do img2vid.

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago

      Yes, not for now. An image-to-video model is in their plans.

  • @andresz1606
    @andresz1606 2 hours ago

    Certainly not with less than 24GB VRAM. The VAE Decode will fail if your card can't handle the sampled video, but only after wasting a great deal of time with all the previous nodes, making it twice as useless. Don't even bother with less than 24 or 40GB VRAM.

  • @pwknai
    @pwknai 3 days ago

    There is one nice solution that simply solves all these complicated VRAM problems:
    let's all buy an Nvidia H100! (...when our yearly income reaches a million dollars T_T)

    • @CodeCraftersCorner
      @CodeCraftersCorner 1 day ago +1

      I'm afraid that's not a realistic solution for most of us!