HUNYUAN Video + WaveSpeed + Lora 🔥 - Increase Speed of Flux, LTX Video and Hunyuan (Just 8GB VRAM)

  • Published: 12 Jan 2025

Comments •

  • @Moonluki · 23 minutes ago

    Hi man!
    Thanks for the videos. I've been going through them for the past month and have learned quite a lot, but there's still something I'm having difficulty with: consistent, realistic human models (a single model) in specific poses. I was able to get the poses right using ControlNet, but the model is extremely inconsistent when it comes to its face and body shape.
    Any suggestions?
    Thanks!

  • @rubi_t · 57 minutes ago

    Hi, can you do a video on the Triton installation on Windows? From everything I've seen online, it looks complicated. Thanks.

  • @ShubzGhuman · 11 hours ago

    Well brother, I've been trying this since yesterday and I've got it down to 42 sec / 20 steps :) with no quality loss, and even the LoRA is working. I'm going to share my workflow; try it and test it.

    • @ShubzGhuman · 11 hours ago

      And my VRAM is 6 GB (RTX 4050).

  • @dumperxo · 12 hours ago

    Can you do a video-to-video workflow for Hunyuan?

  • @JamesPound · 7 hours ago

    Should we be using the --lowvram ComfyUI launch parameter?

  • @Maartenalbers · 10 hours ago

    It works for me, thank you! Could you also share the Flux workflow?

    • @xclbrxtra · 8 hours ago

      Sure I will, but you can integrate this into any Flux workflow. Just add the 'Apply First Block Cache' node like this:
      Model -> Apply First Block Cache -> (whatever the model was connected to)
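The "first block cache" idea behind that node can be sketched generically. This is a toy illustration of the caching strategy, not the real WaveSpeed API; the class and all names below are assumptions. The trick is to always run the first transformer block, and if its output barely changed since the previous denoising step, reuse the cached output of the remaining blocks instead of recomputing them.

```python
# Hypothetical sketch of first-block caching (illustrative names only,
# not the actual WaveSpeed / ComfyUI implementation).
import math

class FirstBlockCachedModel:
    def __init__(self, blocks, threshold=0.1):
        self.blocks = blocks          # callables standing in for transformer blocks
        self.threshold = threshold    # relative-change threshold for a cache hit
        self._first_out = None        # first-block output from the previous step
        self._rest_out = None         # cached output of the remaining blocks

    def __call__(self, x):
        first = self.blocks[0](x)     # the first block always runs
        if self._first_out is not None:
            # Relative L2 change of the first block's output vs. last step.
            num = sum((a - b) ** 2 for a, b in zip(first, self._first_out))
            den = sum(b ** 2 for b in self._first_out) or 1.0
            if math.sqrt(num / den) < self.threshold:
                return self._rest_out  # cache hit: skip the remaining blocks
        out = first
        for block in self.blocks[1:]:  # cache miss: run the full stack
            out = block(out)
        self._first_out, self._rest_out = first, out
        return out
```

Because adjacent diffusion steps produce very similar activations, many steps become near-free cache hits, which is where the speed-up comes from; the threshold trades speed against quality.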

  • @BBZ101 · 12 hours ago

    Can you do a video on how to install it locally, please?

  • @pushpendraprasad3034 · 10 hours ago

    Please integrate it with image-to-video.

    • @ronbere · 9 hours ago

      Not possible with Hunyuan.

  • @shivonviwe · 10 hours ago

    Can someone confirm whether an RTX 30-series card with 6 GB VRAM and 16 GB system RAM would work?

    • @xclbrxtra · 8 hours ago

      If you download the lowest quantized version (Q3_K_S), it'll probably work, but it will be slow.

  • @WildHeroes007 · 13 hours ago

    first to view

  • @dsphotos · 12 hours ago

    I still have the issue with GGUF, but you don't really need it. I got Hunyuan working with the "Load Diffusion Model" node + hunyuan video t2v 720 fp8 (weight type fp8_e4m3fn_fast) and DualCLIPLoader with clip_l + llava llama fp8_scaled, type hunyuan_video. WaveSpeed still speeds up the process, even on my very old PC with only an RTX 3060 (12 GB VRAM).

    • @xclbrxtra · 12 hours ago

      Yes, fp8 will give better results than most GGUF quants (except Q8) 💯🔥

    • @rauliss1 · 9 hours ago

      How long does a video take for you? I have an RTX 3060 12 GB and an Intel i7 11th gen, and it takes 50 min to make a video with a LoRA. I use Hunyuan fp8 and llava fp8 scaled, all in fast mode.

    • @dsphotos · 8 hours ago

      That's far too long. Do you have WaveSpeed running? I have a workflow without upscaling where I can do one in 152 s with 20 steps.