ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

  • Published: 14 May 2024
  • Push your creative boundaries with ComfyUI using a free plug-and-play workflow! Generate captivating loops, eye-catching intros, and more! This free and powerful tool is perfect for creators of all levels.
    Chapters:
    00:00 Sample Morphing Videos
    01:15 Downloads
    02:09 Folder locations
    02:14 Workflow Overview
    04:10 Generating first Morph
    04:40 Running the Workflow
    04:47 Quick bonus tips
    06:35 Supercharge the Workflow
    08:58 Getting more variation in batches
    10:31 Scaling up
    10:59 Scaling up with model
    11:35 This is pretty cool
    I'll show you how to make morphing videos and use images to create stunning animations and videos.
    You'll also learn how to use text prompts to morph between anything you can imagine!
    Plus, there are some valuable tips and tricks to streamline the ComfyUI morphing video workflow and save time while creating your own mind-bending visuals.
    #########
    Links:
    ########
    Workflow: Morpheus Modified workflow for text to image to video
    openart.ai/workflows/abeatech...
    Tutorial for Batch Generating Text to Image using external text file:
    • ComfyUI: Batch Generat...
    Workflow: ipiv's Morph - img2vid AnimateDiff LCM:
    civitai.com/models/372584?mod...
    Note: See 02:09 of the video for model folder locations (a rough layout sketch follows the links below)
    AnimateDiff:
    huggingface.co/wangfuyun/Anim...
    VAE:
    huggingface.co/stabilityai/sd...
    AnimateLCM LORA:
    huggingface.co/wangfuyun/Anim...
    Clip Vision Model ViT-H:
    CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors download and rename:
    huggingface.co/h94/IP-Adapter...
    Clip Vision Model ViT-G:
    CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors download and rename:
    huggingface.co/h94/IP-Adapter...
    IPADAPTER MODEL:
    huggingface.co/h94/IP-Adapter...
    Control Net (QRCode):
    huggingface.co/monster-labs/c...
    Motion animations for AnimateDiff: civitai.com/posts/2011230
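
    A rough sketch (Python) of where each download above typically lives in a stock ComfyUI install. These folder names are assumptions based on common ComfyUI conventions, not taken from the video - confirm against 02:09 before relying on them.

    # Hedged sketch: check the typical ComfyUI folders for the downloads above.
    # Folder names are assumptions from common conventions; see 02:09 in the
    # video for the locations actually used by the workflow.
    from pathlib import Path

    COMFY = Path("ComfyUI")  # adjust to your install root
    locations = {
        "AnimateLCM motion model": "models/animatediff_models",
        "VAE": "models/vae",
        "AnimateLCM LoRA": "models/loras",
        "CLIP Vision (ViT-H / ViT-G)": "models/clip_vision",
        "IPAdapter models": "models/ipadapter",
        "ControlNet (QRCode)": "models/controlnet",
    }
    for name, sub in locations.items():
        folder = COMFY / sub
        print(("found  " if folder.is_dir() else "missing"), name, "->", folder)
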
    ################
    Music: Bensound.com/royalty-free-music
    License code: LU8J6ZAOXHXNOAI4
  • Science

Comments • 75

  • @alessandrogiusti1949 17 days ago

    After following many tutorials, you are the only one getting me to the results in a very clear way. Thank you so much!

  • @ted328 25 days ago +1

    Literally the answer to my prayers, have been looking for exactly this for MONTHS

  • @SylvainSangla 16 days ago

    Thanks a lot for sharing this, very precise and complete guide! 🥰
    Cheers from France!

  • @AlvaroFCelis 15 days ago

    Thank you so much! Very clear and organized. Subbed.

  • @MSigh 21 days ago

    Excellent! 👍👍👍

  • @amunlevy2721 14 days ago +2

    Getting errors that nodes are missing even after installing IP Adapter Plus... missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader

  • @mcqx4 27 days ago +1

    Nice tutorial, thanks!

    • @abeatech 26 days ago +1

      Glad it was helpful!

  • @popo-fd3fr 13 days ago

    Thanks man. I just subscribed

  • @velvetjones8634 28 days ago

    Very helpful, thanks!

    • @abeatech 28 days ago

      Glad it was helpful!

  • @zarone9270 23 days ago

    thx Abe!

  • @TechWithHabbz 26 days ago +1

    You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁

    • @abeatech 26 days ago

      Thanks for the sub!

  • @SF8008 21 days ago

    Amazing! Thanks a lot for this!!!
    Btw, which nodes do I need to disable in order to get back to the original flow? (The one that is based only on input images and not on prompts.)

  • @MariusBLid 25 days ago +1

    Great stuff, man! Thank you 😀 What are your specs, btw? I only have 8GB VRAM.

  • @BrianDressel 16 days ago

    Excellent walkthrough of this, thanks.

  • @rowanwhile 24 days ago

    Brilliant video. Thanks so much for sharing your knowledge.

  • @cabb_ 24 days ago

    ipiv did an incredible job with this workflow! Thanks for the tutorial.

  • @gorkemtekdal 28 days ago +1

    Great video!
    I want to ask: can we use an init image for this workflow like we do in Deforum?
    I need the video to start with a specific image on the first frame, then change through the prompts.
    Do you know how that's possible in ComfyUI / AnimateDiff?
    Thank you!

    • @abeatech 27 days ago

      I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
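
      A minimal sketch of the scheduling idea in this reply: four reference images blended across 96 frames with overlapping weight ramps. The linear crossfade is an assumption for illustration; the actual fade-mask curves in ipiv's workflow may differ.

      # Minimal sketch: linear crossfade weights for 4 reference images
      # across 96 frames (assumed curves; the workflow's fade masks may differ).
      import numpy as np

      frames, n_images = 96, 4
      t = np.arange(frames)
      centers = np.linspace(0, frames - 1, n_images)  # frame where each image peaks
      spread = (frames - 1) / (n_images - 1)          # distance between peaks
      weights = np.clip(1 - np.abs(t[None, :] - centers[:, None]) / spread, 0, 1)
      print(weights.shape)        # (4, 96): one weight curve per image
      print(weights.sum(axis=0))  # ~1.0 everywhere: adjacent images crossfade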

  • @Ai_Gen_mayyit 4 days ago

    Error occurred when executing VHS_LoadVideoPath:
    module 'cv2' has no attribute 'VideoCapture'
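
    This usually points at a broken or shadowed OpenCV install (e.g. conflicting opencv packages, or a local file named cv2.py) rather than the workflow itself. A quick diagnostic sketch, assuming a standard opencv-python install:

    # Check which cv2 is imported and whether VideoCapture exists;
    # a missing attribute usually means a broken or shadowed install.
    import cv2

    print(cv2.__file__)                      # where cv2 was loaded from
    print(getattr(cv2, "__version__", "?"))  # installed version, if any
    print(hasattr(cv2, "VideoCapture"))      # should print True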

  • @ImTheMan725 6 days ago +1

    Why can't you morph 20/50 pictures?

  • @paluruba 22 days ago +2

    Thank you for this video! Any idea what to do when the videos are blurry?

  • @Ai_Gen_mayyit 3 days ago

    Error occurred when executing VHS_LoadVideoPath:
    module 'cv2' has no attribute 'VideoCapture'
    Your video timestamp: 04:20

  • @Halfgawd_Halfdevil 18 days ago

    Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I've also noticed a translucent but noticeable Shutterstock overlay near the bottom of the clip; it kind of ruins everything. Any way to eliminate that artifact?

  • @wagmi614 14 days ago

    Could one add some kind of IP Adapter to add your own face to the transform?

  • @TheNexusRealm 12 days ago

    Cool, how long did it take you?

  • @ComfyCott 12 days ago

    Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!

  • @aslgg8114 27 days ago

    What should I do to make the reference image persistent?

  • @kwondiddy 5 days ago

    I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name:", and "value not in list: vae_name:".
    I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
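
    A "value not in list" error generally means ComfyUI doesn't see a file under the exact name the workflow's dropdown expects. A quick sketch to print what ComfyUI can actually see, assuming a stock folder layout:

    # List the filenames ComfyUI can see, to compare against the names
    # the workflow expects (assumes a stock layout; adjust the root).
    from pathlib import Path

    COMFY = Path("ComfyUI")  # adjust to your install root
    for sub in ("checkpoints", "loras", "vae"):
        folder = COMFY / "models" / sub
        names = sorted(p.name for p in folder.glob("*")) if folder.is_dir() else []
        print(sub, "->", names or "folder missing or empty")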

  • @pro_rock1910 22 days ago

    ❤‍🔥❤‍🔥❤‍🔥

  • @TinyLLMDemos 3 days ago

    Where do I get your input images?

  • @efastcruelx7880 4 days ago

    Why is my generated animation very different from the reference images?

  • @MichaelL-mq4uw 25 days ago

    Why do you need ControlNet at all? Can it be skipped to morph without any mask?

  • @saundersnp 22 days ago

    I've encountered this error: Error occurred when executing RIFE VFI:
    Tensor type unknown to einops

  • @tetianaf5172 1 day ago

    Hi! I get this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use a 1.5 checkpoint. Please help.

  • @CoqueTornado 27 days ago +1

    Great tutorial. I'm wondering... how much VRAM does this setup need?

    • @abeatech 27 days ago +1

      I've heard of people running this successfully on as little as 8GB VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi

    • @CoqueTornado 24 days ago

      @@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!

  • @brockpenner1 21 days ago

    ComfyUI threw an error in the VRAM Debug node of Frame Interpolation:
    Error occurred when executing VRAM_Debug:
    VRAM_Debug.VRAMdebug() got an unexpected keyword argument 'image_passthrough'
    Any help would be appreciated!

  • @user-vm1ul3ck6f 27 days ago +2

    Help! I encountered this error while running it:
    Error occurred when executing IPAdapterUnifiedLoader:
    module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech 27 days ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint - in which case, try an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5.
      or
      b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).

  • @TinyLLMDemos 3 days ago

    How do I kick it off?

  • @yakiryyy 28 days ago

    Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images.
    The results I get are pretty different from the reference images.
    Am I wrong in my assumption?

    • @abeatech 28 days ago

      You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.

    • @efastcruelx7880 4 days ago

      @@abeatech Is there any way to make the result more like the reference images?

  • @Adrianvideoedits 3 days ago

    You didn't explain the most important part, which is how to run the same batch with and without upscale. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea, though.

  • @cohlsendk 23 days ago

    Is there a way to increase frames/batch size for the FadeMask? Everything over 96 is messing up the FadeMask -.-''

  • @devoiddesign 23 days ago

    Hi! Any suggestion for a missing IPAdapter? I am confused because I didn't get an error to install or update, and I have all of the IPAdapter nodes installed... the process stopped on the "IPAdapter Unified Loader" node.
    !!! Exception during processing!!! IPAdapter model not found.
    Traceback (most recent call last):
      File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
        raise Exception("IPAdapter model not found.")
    Exception: IPAdapter model not found.

    • @tilkitilkitam 18 days ago

      same problem

    • @tilkitilkitam 18 days ago +1

      ip-adapter_sd15_vit-G.safetensors - install this from the Manager

    • @devoiddesign 18 days ago

      @@tilkitilkitam Thank you for responding.
      I already had the model installed, but it was not being seen. I ended up restarting ComfyUI completely after updating everything from the Manager, instead of only doing a hard refresh, and that fixed it.
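
      For anyone hitting the same exception: the unified loader raises it when nothing in the IPAdapter model folder matches the preset it asks for. A small sketch to list what that folder contains, assuming the stock models/ipadapter location:

      # List candidate IPAdapter model files (stock location assumed;
      # adjust the path to your install).
      from pathlib import Path

      folder = Path("ComfyUI/models/ipadapter")
      if not folder.is_dir():
          print("missing folder:", folder)
      else:
          for p in sorted(folder.glob("*.safetensors")):
              print(p.name)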

  • @rooqueen6259 12 days ago

    Has anyone else run into the loading of 2 new models stopping at 0%? In my case the loading of 3 new models also reached 9% and then went no further. What is the problem? :c

  • @WalkerW2O 4 days ago

    Hi Abe aTech, very informative, and I like your work very much.

  • @axxslr8862 23 days ago

    In my ComfyUI there is no Manager option... help please.

  • @creed4788 22 days ago

    VRAM required?

    • @Adrianvideoedits 3 days ago

      16GB for upscaled.

    • @creed4788 2 days ago

      @@Adrianvideoedits Could you make the videos first and then close and load the upscaler to improve the quality, or does it have to be all together, so it can't be done in 2 different workflows?

  • @ErysonRodriguez 26 days ago

    Noob question: why is my output so different from my input?

    • @ErysonRodriguez 26 days ago

      I mean, the images I loaded produce a different output instead of transitioning.

    • @abeatech 25 days ago

      The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.

  • @3djramiclone 14 days ago

    This is not for beginners, put that in the description, mate.

    • @kaikaikikit 7 days ago

      What are you crying about... go find a beginner class if this is too hard to understand...

  • @user-vm1ul3ck6f 27 days ago +1

    Help! I encountered this error while running it:

    • @user-vm1ul3ck6f 27 days ago +1

      Error occurred when executing IPAdapterUnifiedLoader:
      module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech 26 days ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint - in which case, try an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5.
      or
      b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).

    • @Halfgawd_Halfdevil 19 days ago

      @@abeatech The note says to install it in the clip_vision folder, but that can't be it: none of the preloaded models are there, and the new one installed there does not appear in the dropdown selector. So if it is not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just use the IPAdapter Plus node?