UltimateLCM AnimateDiff Vid2Vid Workflow! // Civitai AI Animation Tutorial

  • Published: 11 Nov 2024

Comments • 63

  • @rmeta3391 1 month ago +1

    Nice, so many workflows, so little time. You're killing it on Thursdays! Much thanks.

  • @dadekennedy9712 1 month ago

    Aye!! Thanks for the great work!

  • @PeteStueve 1 month ago

    Civit and you are a super good match; I actually like Civit a lot more now. I'll have to check out a Twitch stream one day.

    • @civitai 1 month ago

      Wow, what a compliment! Thanks so much! Come join the Twitch streams, we always have lots of fun on there and the chat is super friendly and knowledgeable!

  • @dmsa2 1 month ago

    I'm just starting out with this process. Is there somewhere I can download a folder with all the modules, LoRAs, and ControlNets already pre-installed, like a portable version? I tried for 5 hours to get the installations to work and nothing worked. Thanks.

  • @nicolasmarnic399 16 days ago

    Hello. Is it strictly necessary to put a mask in this workflow?

  • @AshChoudhary 1 month ago +1

    Hi, I'm facing a strange error: the image input on the VideoCombine node isn't taking a connection from any image node. What could be the issue? Great workflow, btw.

    • @Zennu9 1 month ago

      I'm facing the same issue after updating ComfyUI

    • @AshChoudhary 27 days ago

      @@Zennu9 I didn't find a permanent fix, but cloning the VideoCombine node worked for me. I just had to reconnect the image and filename_prefix inputs.

    • @jittthooce 14 days ago

      @@AshChoudhary How did you fix the filename prefix issue?
      Failed to validate prompt for output 252:
      * (prompt):
      - Required input is missing: filename_prefix
      The only input options I see are: images, audio, meta_batch, and vae.

    • @jittthooce 14 days ago

      nvm, fixed it
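
      For anyone else hitting that validation error, here is a minimal sketch of a patch script, assuming a standard ComfyUI API-format export where each node is stored as {"class_type": ..., "inputs": {...}}; the VHS_VideoCombine class name and filename_prefix field follow VideoHelperSuite's defaults, and the file names are placeholders:

      import json

      # Load an API-format workflow export.
      with open("workflow_api.json") as f:
          workflow = json.load(f)

      for node_id, node in workflow.items():
          if node.get("class_type") == "VHS_VideoCombine":
              inputs = node.setdefault("inputs", {})
              if "filename_prefix" not in inputs:
                  # Re-add the widget value the validator reports as missing.
                  inputs["filename_prefix"] = "AnimateDiff"
                  print(f"Patched node {node_id}: added filename_prefix")

      with open("workflow_api_fixed.json", "w") as f:
          json.dump(workflow, f, indent=2)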

  • @Cybernaut7777 1 month ago

    Thx for your great work! I'm having an issue: the background doesn't seem to be taking much inspiration from the "IPAdapter (Background)" section and instead (mostly) copies the original video despite the SAM mask. What am I doing wrong? Thx

    • @civitai 1 month ago

      You're going to want to mess around with the ControlNet combos. The most creative combo is depth + openpose + controlgif. I'd keep controlgif at 0.4 and not go above that, though. Depending on your original input video, you'll also want to find IP images that share at least some context with the BG. At the end of the day, it's going to trace what is already in the source video.
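
      As a minimal sketch of that combo expressed as a config: only the 0.4 ceiling for controlgif comes from the reply above; the depth and openpose strengths are illustrative placeholders, and the model file names are the common SD1.5 ControlNet releases, not confirmed from the video:

      # Illustrative ControlNet stack for the "most creative" combo described above.
      controlnet_stack = [
          {"model": "control_v11f1p_sd15_depth.pth",  "strength": 0.8},  # placeholder strength
          {"model": "control_v11p_sd15_openpose.pth", "strength": 0.8},  # placeholder strength
          {"model": "controlgif.ckpt",                "strength": 0.4},  # keep at or below 0.4
      ]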

  • @glillemon 26 days ago

    This workflow is sick. Does anybody have a suggestion on how to keep the background unchanged, but with some control over the mask blurriness so the mask blends the Stable Diffusion/AnimateDiff output into the live action? I think that's what QR Code Monster is for, but I'd appreciate any suggestions!

  • @ceaselessvibing5997 1 month ago +1

    @Civitai, would this work for Pony-based models? (I tried and suffered a bit.)
    I had some funky issues trying to generate just one image/frame, and it wasn't producing the images I was expecting from the image I provided.
    Without going into too many workflow shenanigans... what are the current model limitations for doing this kind of thing?
    I.e., is it only SD1.5, etc.? Without blowing up my brain too xD

    • @civitai 1 month ago +2

      This is only compatible with 1.5 and 1.5 LCM models. I'd recommend using the LCM 1.5 LoRA at a strength of 1.0 for non-LCM models and a strength of 0.18 for LCM models 🙏🏽🫶🏽
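
      A minimal sketch of that rule of thumb, assuming a LoraLoader-style strength setting (the helper name is hypothetical, for illustration only):

      # LCM 1.5 LoRA strength per the reply above.
      def lcm_lora_strength(checkpoint_is_lcm: bool) -> float:
          # 1.0 when the base checkpoint is a plain SD1.5 model,
          # 0.18 when the checkpoint is already an LCM model.
          return 0.18 if checkpoint_is_lcm else 1.0

      print(lcm_lora_strength(checkpoint_is_lcm=False))  # 1.0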

  • @Zany-g3h 1 month ago

    I can't load your workflow inside Comfy. It gives me an error message: ReActorFaceSwap. I've tried to run a fix, but I'm getting this message: ReActor Node for ComfyUI fix failed: This action is not allowed with this security level configuration.

    • @LuckyGuyAE 28 days ago

      I have the same issue, bro. I don't know how to fix it either.

  • @jittthooce 1 month ago

    I can get 4-5 sec outputs without any issues on a 4090 with 30 GB of RAM on RunPod. However, when I tried a 15 sec video, it straight up kills the process right after the ControlNet processing. Any tips to get it working on slightly longer videos? I mean, isn't that enough computing power to generate 10-15 second animations? Thanks for the updated workflow, btw. You are doing a great service by putting these things together for people who aren't able to sit and figure them out on their own.

    • @civitai 1 month ago

      I am able to do up to 1000 frames at a time with my 4090, but I'm doing it locally. I'm not entirely sure, but that sounds like it could be on RunPod's side. Sounds like the key difference is local vs. cloud.

  • @lucifer9814 1 month ago +1

    I always get an error with the ReActor node. It basically says "Error loading ReActor node" regardless of installing it or even fixing it.

    • @civitai 1 month ago +2

      In that case, I'd just delete it. I can't remember the last time I used it; I just have it there as a "just in case". But tbh, it's probably not worth the wrestling.

    • @lucifer9814 1 month ago

      @@civitai So this whole workflow would still work even without the ReActor node?

  • @musyc1009 1 month ago

    Great job as always, bro. Quick question: is there a reason why the output video is always 1 second shorter than the input video? I didn't skip any frames or put a frame cap on it.

    • @civitai 1 month ago +1

      Hmmm, I never have that problem, so I'm not entirely sure. I'm sorry :/

    • @musyc1009 1 month ago

      @@civitai Yea, I googled and couldn't find anything about it. It's just weird and still does that for some reason 🤔 It's always 1 second shorter.
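
      One way to narrow this down is to compare frame counts and fps of the input and output; a minimal sketch using OpenCV (the file names are placeholders):

      import cv2

      # duration = frame_count / fps; a mismatch shows where the missing second goes.
      for path in ("input.mp4", "output.mp4"):
          cap = cv2.VideoCapture(path)
          frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
          fps = cap.get(cv2.CAP_PROP_FPS)
          print(f"{path}: {frames:.0f} frames @ {fps:.1f} fps = {frames / fps:.2f} s")
          cap.release()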

  • @gardentv7833 1 month ago

    Apply ControlNet Stack
    'NoneType' object has no attribute 'copy'
    I got this error message, any clue sir?

    • @civitai 1 month ago

      Are the ControlNets in the proper folders? Hard to tell without seeing what you've got going on.
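
      A quick way to verify the models landed where ComfyUI looks for them; a minimal sketch assuming the stock install layout (adjust the path if you use extra_model_paths.yaml):

      from pathlib import Path

      # List every ControlNet model file ComfyUI can currently see.
      cn_dir = Path("ComfyUI/models/controlnet")
      for f in sorted(cn_dir.glob("*")):
          print(f.name)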

  • @zuzuumaibam 1 month ago

    The Bypass button is not there. I even updated ComfyUI. Is there anything I am missing?

    • @civitai 1 month ago

      There is a little button in the top right corner of each group. Just click it :)

    • @TheTruthIsGonnaHurt 26 days ago +2

      Right-click anywhere on screen to bring up the menu, scroll to RGThree Comfy, and click Settings.
      Scroll down to (Groups) Show Fast Toggles in Group Headers, then select Toggle: Bypass and Show: Always.
      Once you do that, the Bypass button will appear in the top right corner of each group.

  • @Caret-ws1wo 1 month ago

    Is there a way to only diffuse certain parts of the mask? I.e., only generate on the white and leave the background black?

    • @civitai 1 month ago +1

      After you cut out your character, try using a solid black frame in your background IPAdapter and prompting for a black background :)

    • @Caret-ws1wo 1 month ago

      @@civitai Perfect, thanks!
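
      A minimal sketch of that black-frame trick, generating a solid black image to feed the background IPAdapter (the 512x768 size is a placeholder; match your render resolution):

      from PIL import Image

      # Solid black frame for the background IPAdapter input.
      Image.new("RGB", (512, 768), (0, 0, 0)).save("black_bg.png")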

  • @amersenlu875 1 month ago

    Has anybody gotten "Sampler (Efficient): 'NoneType' object has no attribute 'shape'"? I downloaded the ControlNets, checkpoints, and LoRAs like in the video and I get this error. Help!

    • @amersenlu875 1 month ago

      Ok, I figured out the problem is with the linear controlnet.

    • @amersenlu875 1 month ago

      Made it work, but it doesn't seem to be taking the background I chose.

  • @pookienumnums 1 month ago

    Yuh! (first too! on my bday as well! holla)

  • @theflowbeta1604 1 month ago

    Hello, for a few days now I have been claiming the daily Buzz at each reset, but when I claim it, the daily Buzz never adds up. Before, everything was perfect, but now it is not anymore. I have seen that the same thing happens to other people. Can you fix it?

    • @civitai 1 month ago

      Feel free to reach out to us via our support email or in discord 🙏🏽

  • @Scherzify 6 days ago

    Please review your workflow; it seems to be broken, and the node links are broken.

  • @Lucy-z5d 1 month ago +1

    Great workflow. However, it looks like without a 4080 or 4090 it will take forever just to get a 5 sec video output.

    • @civitai 1 month ago

      Unfortunately, it is not low-VRAM friendly. This workflow will take at least 12-15 GB to run because of the mask and the IP adapters.

    • @Lucy-z5d 1 month ago

      @@civitai Thank you for reminding me. Is there another version for 8 GB?
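
      A quick sanity check against that 12-15 GB figure; a minimal sketch using PyTorch:

      import torch

      # Report the total VRAM of the first CUDA device.
      props = torch.cuda.get_device_properties(0)
      print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")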

  • @fr0zen1isshadowbanned99 26 days ago

    Comfy is bad as always. Videos don't show the selected parts, frames, framerates...
    Nodes are not properly connected. FaceSwap, as always, is broken. Videos can't be created with the node used. Videos can't be played in any format other than webm.
    How long have I been trying to use this dumpster fire of a UI now? 2 years? 2 years and still very similar problems to when I first had the displeasure of trying it.
    And btw, I switched PCs in the meantime, so that is not the problem!
    You tried, and I thank you for that :)
    One time it even worked very well, until the "updates" arrived ^^ And it seemed to work today too,
    just after doing lots of disconnecting and swapping to normal nodes, and without the video settings working.
    One day they will release Sora, and maybe at that point there will be a good UI ^^ Maybe... but likely not xD

    • @nirdeshshrestha9056 20 days ago

      Can you send your fixed workflow please? I am having a headache.

    • @fr0zen1isshadowbanned99 20 days ago

      @@nirdeshshrestha9056 Have you received the link? I don't know how YT handles putting links in the comments.

    • @fr0zen1isshadowbanned99 20 days ago

      @@nirdeshshrestha9056 But don't expect too much :)
      I had to do a quick fix on the workflow, and about half an hour was spent on somehow linking it to you ^^
      If you have questions, ask away... and don't forget that you need all the ControlNet models (btw: play around with them; try switching one to OpenPose, that's mostly better for me) and the IP Adapter + PyTorch model.
      jboogx has a setup guide somewhere, but I don't know where.

    • @fr0zen1isshadowbanned99 20 days ago

      @@nirdeshshrestha9056 Ok... I tried linking it here, but NO CHANCE! Not even giving you my mail. AND THIS MESSAGE WAS DELETED TOO!!!!
      So you have to go to the link in the video description and look there for my comment under his workflow.
      My name there is FrozenGT.
      I hope that worked now xD... I have spent over an hour on this now D:

  • @OptimBro 1 month ago

    " most important part of what we do....CREATING!" 🥹🥹

  • @jonrich9675 1 month ago +1

    Still 100% lost. I use Flux, and this 100% doesn't help me out at all.
    I just started ComfyUI like 10 days ago and already know that you MUST use only certain models with certain IPAdapters with certain UNets, etc.
    I just need someone to show me ALL FLUX. This is SD1.5, but nobody in the world has done this with just Flux yet. Kinda annoying that I now have to swap everything to lame SD models, ugh.

    • @civitai 1 month ago

      Flux does not have a working motion model yet, so there is no way to do clean vid2vid style transfers with it just yet. We are sure there will be one, but it has not been released. This workflow is only for SD1.5. We also have a tutorial from a few weeks back with Inner Reflections showing how to use his SDXL workflow.

    • @jonrich9675 1 month ago

      @@civitai can..can i just give u buzz so you can train one?

    • @Zerod-rn3ye 1 month ago

      @@jonrich9675 Unfortunately, it does not work that way. The issue is that Flux is too new (a few months old), while SD has been established for several years and thus has far more tools and knowledge built up among the community, researchers, and businesses. Civitai does not make any of this tech; they merely act as a host for models. Models like Flux cost hundreds of millions to produce. As for merging/training checkpoints, as seen with SD, no one has figured out how to do this with Flux yet (currently, checkpoint merges with Flux are something else entirely: they are highly prone to degraded visuals, poor prompt adherence, major issues with LoRAs, and frequent crashes for most users... thus no major checkpoint has gained popularity over the base model yet). Rather, the focus with Flux is on LoRAs, at the moment at least. It isn't even known, and is heavily doubted, that we'll ever see proper checkpoint releases beyond base Flux, considering how it works, unlike SD.
      Now, the ones who make these models are individual companies spending hundreds of millions to develop them, while tools like ControlNet, etc. are developed by researchers and sometimes very capable members of the community (usually extending researchers' shared open-source results to implement them in ComfyUI, etc.). In short, it will take more time for Flux support to grow.
      However, you can still generate images in Flux and then transfer them to SD for certain processes to refine or generate additional content, just like you can do with real-world photos via img2img, etc.
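
      A minimal sketch of that Flux-to-SD handoff using Hugging Face diffusers (the model IDs, prompt, and 0.5 strength are placeholder assumptions, and FluxPipeline needs a recent diffusers release):

      import torch
      from diffusers import FluxPipeline, StableDiffusionImg2ImgPipeline

      # 1) Generate the base image with Flux.
      flux = FluxPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
      ).to("cuda")
      image = flux("a portrait of a knight", num_inference_steps=4, guidance_scale=0.0).images[0]

      # 2) Refine it with an SD1.5 img2img pass, as described above.
      sd = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      refined = sd("a portrait of a knight", image=image, strength=0.5).images[0]
      refined.save("refined.png")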

  • @alexhowe4775 1 month ago +2

    Far too much yapping, but good information nonetheless.

  • @lofigamervibes 1 month ago

    Oh my god, how did you know I like waifus?? That's crazy. You're so right, though. 😇

    • @civitai 1 month ago +1

      Lucky guess :P

  • @Otchengazoom 1 month ago

    Yes, we like waifus 🤗😍🤘

    • @civitai 1 month ago

      This we do, my friend. This we do. Go make a cool one and share it!