Ultimate Guide to Seamless AI Animations (Even on Low-End PCs!)
- Published: 28 May 2024
- 🚀 Getting Started with ComfyUI and AnimateDiff Evolved! 🎨
In this comprehensive guide, we'll walk you through the easiest installation process for ComfyUI and AnimateDiff Evolved. Even if your computer doesn't meet the system requirements, we've got you covered with a handy workaround. P.S. I was going through an allergic reaction to coffee whilst filming this...
Animated videos are a ton of fun, and we'll focus on creating amazing animations. Whether you're a seasoned animator or just starting, this guide is designed to make you feel right at home with ComfyUI.🎥
💡 Tutorial Blog Post Coming Soon: www.promptmuse.com
📱 Connect with me on Social Media: @promptmuse
🔥 What's in Store for You? 🔥 Timestamped
✅ 0:00 Step-by-step installation instructions for ComfyUI and AnimateDiff Evolved.
✅ 1:45 System Requirements & Tips for Overcoming Hardware Limitations
✅ 3:31 Software dependencies
✅ 4:05 Installing ComfyUI & ComfyUI Manager
✅ 8:25 Basic Workflow Overview
✅ 11:35 Installing AnimateDiff Evolved (Kosinkadink)
✅ 17:18 Text To Video
✅ 18:35 Add a LoRA to Add More Detail
✅ 20:45 Video To Video Animations Workflow
✅ 26:17 Prompt Travel / Batch Prompt Schedule
✅ 26:29 Cannot be Unseen, once viewed
🔗 Explore Useful Tutorial Links:
Shadow.Tech PC in the Cloud: shadow.tech/ (my referral code is 8745831)
Git Download: git-scm.com/downloads
FFmpeg Download: ffmpeg.org/download.html
Install ComfyUI: github.com/comfyanonymous/Com...
Cardos Anime CivitAI Checkpoint Model: civitai.com/models/25399/card...
AnimateDiff Evolved For Template: github.com/Kosinkadink/ComfyU...
VAE Download: huggingface.co/stabilityai/sd...
InnerReflections Workflow Video To Video: civitai.com/api/download/atta...
Useful Workflow Links:
civitai.com/articles/2601/gui...
github.com/Kosinkadink/ComfyU...
By the end of this guide, you'll be ready to unleash your creativity and create epic animations of your own using ComfyUI and AnimateDiff 🎉
Don't forget to like, comment, and subscribe....pleassssseeee I beg of you
#aianimation #comfyui #animatediff #aianime #stablediffusion
FAQ For the Pin!
Timestamps in description :)
Everything runs... but I do get these errors? Do they matter?
Efficiency Nodes Warning: Failed to import python package 'simpleeval'; related nodes disabled.
WAS Node Suite Error: Unable to load conf file at `/content/drive/MyDrive/AI/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json`. Using internal config template.
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `/content/drive/MyDrive/AI/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite Error: Unable to load conf file at `/content/drive/MyDrive/AI/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json`. Using internal config template.
WAS Node Suite: Finished. Loaded 197 nodes successfully.
2023-11-02 01:52:45.004059: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-02 01:52:45.004117: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-02 01:52:45.004167: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-02 01:52:46.305429: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Great great tut! Thanks! Will it work with SDXL? Or only after we get SDXL motion models?
Error occurred when executing ControlNetLoaderAdvanced:
empty() received an invalid combination of arguments - got (tuple, dtype=bool, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
What is the problem?
Great, what about making less anime-ish imagery and making it more 3D-realistic?
Best channel for anything related to animation, AI-powered artistry, and well-founded insights! Whenever there's something new from you, we know it's time to delve in and learn some new techniques. Thanks a lot for everything we've learned from you so far.
Wow what an uplifting comment. Thank you so much. I absolutely love learning and helping others on the way, if I can - my two passions in life. Made my day, thank you for your support and kind words.
Great job as always. Your deep knowledge of so many pipelines is so praiseworthy. Once again I'm totally impressed. I've been reluctant to load up my computer with models, but the cloud alternative right out of the box directs me to a way I hadn't considered. I'm a fan!
Thank you so much!
First video I've seen on your channel and I already subscribed. The amount of useful information is just on another level. Thank you for sharing.
Fabulous tutorial! You have fun energy. I’m looking forward to trying the video to video workflow. Thanks so much.
Absolutely charming videos from you on AI + 3D animation workflows. Liked, subscribed, bookmarked and eagerly awaiting more content. Cheers.
this was incredibly informative! wow thank you so much, im really looking forward to seeing what I'm able to create!
Wow, that was a really nice animation. and really nice tutorial and editing. The zoom ins are super helpful!
Hello my friend !
@@promptmuse hello 🤗
Thanks for another great video, PM. I was dabbling with this already, so your vid was perfectly timed for me. I learned some new things and it was very helpful. Keep up the good work!
Awesome, it’s exciting to learn. Don’t get me wrong, I still love the A1111 UI, but this seems to produce faster results; we all love a bit of that haha. Good luck with your project!
This 26min took me some hours lol I was pausing and noting every step, and finally I could start my journey in ComfyUI, thank you so much!! 🙏🙏
Amazing !!! Thank you for letting me know, and glad I could help 🫶
Beautiful video, perfect as always, congratulations ❤
great video. Thanks for posting
Really good tutorial !
Super helpful. Thanks for the tutorial.
You're welcome!
Excellent tutorial video, many thanks!
Thank you so much 🫶🔥
Thank you very much for the lesson
You're a very thorough explainer.
Your video really took away the fear of the unknown after that first intimidating impression that I get with every node-based tool.
I’m so glad to hear that, and that’s why I love making this content!
A perfect, succinct, tour de force video!!!
Thank you !
Fantastic and useful information, thanks for your vid.
Thanks so much !! I appreciate your feedback 🫶
Thanks. Cleared up all the doubts and errors I had.
Thank you so much ❤️🔥 I’m glad it all worked out.
Thanks for this great video. Liked, subscribed of course. I would love to see you taking a step further up by doing a tutorial on creating a 3 mins or longer story using the batch prompt schedule. This would be a challenge as we are so limited with the motions.
Thank you for this!!!! I love your videos and you're my first AI crush! :)
Though I am still learning the basics of SD & A1111 for still images, I found this to be a brilliant tutorial for when I get advanced enough to try ComfyUI and try animations. Thank you 🙏🏽 so much for the detailed and easy to follow tutorial.
Awesome to hear, I'm going to link some additional resources that you may find helpful for your ComfyUI journey on my website today as well :) www.promptmuse.com
Finally decided to try, you're a true Goddess. tsvm
again a great video, thank you so much
No, thank you for watching !
Great video! Thanks
Work of the Gods!
I don’t normally press the alert button. I did for this one!
I hope it didn’t disappoint. Quite a long one, but meant as more of a reference if anyone gets stuck :)
you just earned my sub. big time.
Thank you very much for this video quite helpful and informative. Thanks
Thank you for the kind comment, I’m glad it helped :) 🙏
Great Video👍
This UI sort of reminds me of Blueprints in Unreal Engine, or even Blender's node system.
Thank you! It really is like Blueprints and the material editors. I think this will be user-friendly for 3D artists and devs. However, it may be a bit intimidating to those who are new to node-based layouts. I always advise people to work with what they are comfortable with, as everything here can be done in Automatic1111 :) which is still an amazing UI in itself!
Thanks so much, amazing content as usual! Really appreciate the step-by-step details 🙏. I'm wondering - and I couldn't find the answer anywhere - is there a node that allows you to start the animation with an image? I saw "text to video" and "video to video" but is there a workflow that starts with an image (like img2img), then fires through the prompt/checkpoint/video generator to create the animation based on the image, perhaps biased by denoising?
Thank You.....Cool
Thank you so much, really good video.
Super helpful, thank you very much! Your videos are always rich with information. I'm trying to make my little AI video-tale. Any idea how to start the animation with an image, image-to-video?
You looking cool 💥
If you already have A1111 installed, you can take advantage of a ComfyUI setting (config file) that points to the models of that install. This will save a ton of HDD space. Otherwise you can end up doubling up on all of those, and it really adds up fast.
Do you have a video reference on doing that? I already have A1111 installed locally.
@@Nekotico Edit extra_model_paths.yaml and change the 'base_path' value.
Mine looks like this: 'base_path: E:\AI\sd.webui\webui'
@@Nocare89 ty for sharing!
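To make the tip above concrete: ComfyUI ships an extra_model_paths.yaml.example in its root folder with a commented-out section for A1111. A minimal sketch (the base_path below is illustrative; point it at your own A1111 install):

```yaml
# Sketch of extra_model_paths.yaml. Only base_path is machine-specific;
# the subfolder entries mirror A1111's standard layout.
a111:
    base_path: E:\AI\sd.webui\webui    # your A1111 install folder
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet
```

Rename the .example file to extra_model_paths.yaml, edit base_path, and restart ComfyUI; the shared models should then appear in the loader dropdowns.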
$40 a month for a 1080!! You can buy one second hand for under £200.
M’yeah. But also many people will need to shell out to build the rest of the machine. Can get pricey. Depends on how long you want this for and how much change is in your pocket.
IMHO a 12G 3060 makes more sense, 50% more memory and support for the latest version of CUDA.
I bought one for 220€ ~ 190£ but prices are rising.
Yeah that is a lot of money for 1080
It’s not just the card. You’re basically renting a virtual PC with storage, memory, a network connection, GPU and CPU, and you don’t have to worry about it interfering with or using space on whatever computing device you use most of the time. You also don’t pay for electricity, and since it’s a monthly charge you can run it all day and all night. In that case, picking a lower-cost, slower device might be perfectly fine, since you can batch up lots of work and just go to sleep. Meanwhile, every second you sleep, your 3060 is depreciating, and it’s either idle or chewing up electricity that you pay for, same with all the other components. Just the terabyte of storage connected to a full PC is quite useful. And if you pay by the month and don’t need it any more, just stop paying: they will delete it and you can do something else.
@@tysond1500 I like to build and train my AI models on Google Colab. Doesn't cost a penny unless you start building monster models!
Awesome and really helpful tutorial!! Didn't you say that there is a glossary on your website? I couldn't find it. And can you tell me what all those brackets and ":1.2" in the AnimateDiff prompts are used for? (Sorry, English is not my mother tongue 🙂)
Clicked on this because of the "even on low-end PCs" bit.
My PC is really low-end, a 4GB card, and ComfyUI runs fine... well, slow, but fine. AnimateDiff is also fine if you lower the resolution and then upscale afterwards. I made a 1080p music video on my potato PC. ComfyUI is fine!
Amazing ! That’s really good for others to know. Would love to see the music video you created. Did you have to reduce the sample by chance? 🔥
I downloaded a workflow similar to yours and just changed the resolution to 384x384, nothing else, and it didn't run out of memory. The music video is the latest video on my channel; can't post a link, YouTube doesn't like me doing that. I use ComfyUI to make click-bait thumbnails too 😁
@@Satscape Awesome, just subscribed 🔥✊
can you share the workflow link please ?@@Satscape
rubbing off on you at the end!
Would you happen to know what the differences are between the Load Video (Path) and Load Video (Upload) nodes? Thanks for the video!
I tried the text to video and it worked great, except that the video was blurry. What do I need to adjust on the KSampler to try and get rid of the blur? Or is there something else I need to adjust? Thanks.
Thanks this is an awesome tutorial and got me up and running really quickly! I did get an issue after installing the motion model. When I try to open the UI I get the error:
No module named 'comfy.options'
It looks like it might be something to do with incompatible version of python? I have the correct file structure still, but no options folder.
Let me know if this is something you've run in to before, thanks!
It was so helpful, and it's 😊great to know we can use a fake computer remotely. That is so helpful! Thanks for your knowledge and sharing.
You are welcome. You can use it through an iPad or phone as well; just log in through your Shadow Tech browser and it's there. Love it when tech becomes universal!!!
@promptmuse That's so good to know. I was saving for a better computer for Warp but will definitely try this instead. In the meantime, thanks so much. I messaged you on Instagram BTW x
Thank you very much for sharing your knowledge and great tutorials with us.😍
I purchased the same cloud PC from Shadow PC (Nvidia Quadro P5000), but it takes so long (2 hours) to run the same process with the same settings and a similar video (2 seconds) in ComfyUI and AnimateDiff. The GPU (CUDA) is running at its full power.
Could you give me some advice please. 😍
Great content, thank you.
Next time I'll wear sunglasses, cos Bright Lights :)
Got a new light, as that one was crazy !! 🤣
which controlnet thing should I download if I wanted to use sdxl checkpoints to animate?
Hi, what about image to video in Comfy with AnimateDiff? I create my images in another image generator, so I don't need the text-to-image part. Do you have a workflow for this, or maybe another video? Thx :-)
Hi. Can you use the upscale node for downsizing video?
Very nice, thy angelic muse, anywhere we can get the jsons from?
Hello, bottom of the description. Just added. I’ll add some more as well and some links to some more templates :)
Thanks for the tut. I still need to install ComfyUI and migrate from A1111.
Thank you for the reply! If you have space I'd keep both; Automatic1111 is still a great UI and can do everything in this video. Picking a UI is quite like Marmite: select the one you feel most comfortable with and which is most efficient for the task.
Why did you decide to use animatediff instead of deforum? Was there any specific reason?
Very nice tutorial, but I'm getting the error " "upsample_nearest2d_out_frame" not implemented for 'BFloat16' " when it comes to Video Combine. Any suggestions? Tried googling but no one else seems to have the same problem. I'm using Vlad's standalone, fully updated.
I started using Stable Diffusion about a week ago, on Automatic1111 and Vladmandic, but decided to switch to ComfyUI due to recurring issues with the other WebUIs (I've spent more time troubleshooting than actually creating...). I've been watching a lot of videos about ComfyUI and yours has been the most concise, with a lot of clear, easy-to-understand information in a short time. Highly appreciated!
I have a question: Do you know how to "switch" the character in a video2video workflow? For example, I'd like to use a video, openpose, but have a completely different character in the final video.
Would this be done through the use of LoRa? Thank you!
Yes! You are correct, you can use a LoRA, and prompting will also help you. I.e. “beautiful Angelina Jolie, Lara Croft, female” will get you that result. LoRAs will also help with consistency. For the overall look, the model (checkpoint) you select will drive the style, e.g. a Pixar checkpoint will make Angelina Jolie in a Pixar style. Hope that helps :)!
Is there any way to force the head to stay static and only have expressions limited to mouth or eyes?
I don't have Manager as a menu option. How do I get it? This has stopped my project.
I have a 6GB VRAM laptop and tried it. The first run failed because of a CUDA out-of-memory error, fixed by lowering the resolution from 512x896 to 640x360, and it works. Render time is 40 min for a 12s video. Hope this helps as a LOW GPU WORKFLOW for this video tutorial :)
Super new to this; does this mean if I use an SDXL 1.0 LoRA it won't work?
How would you do image to vid ?
Pls can u make a tutorial about nightshade?
Oooh yeah, that’s a very interesting topic. Would love to see everyone’s opinion on it.
I need help. I have installed everything, but when trying to make a video-to-video I get the following error:
Error occurred when executing KSampler:
'ControlNetAdvanced' object has no attribute 'model_sampling_current'
Mine says When loading the graph, the following node types were not found:
ADE_AnimateDiffLoaderWithContext
Fizznodes doesn't seem to come up. Not sure if that's deprecated now
Hey, how can I install FFmpeg? I cannot seem to find it in the video. Thank you!
Can you preview one frame, to see what the prompt does, before the ksampler takes forever rendering the whole animation?
AHHHHA! white balance!
Hi, your prompts don't contain keyframes; how do you obtain animation without keyframes?
Hi @Prompt Muse, I have subbed to your channel and followed your tutorial. It runs, but generating the image is very slow. I'm using a Lenovo Y70 with 16GB RAM and a GeForce GTX 860 (very old laptop). I want to ask: is this because of my laptop, or did I miss anything that causes my generation to be slow? It looks like to use ComfyUI conveniently I need to upgrade my laptop? I can still do video editing with Adobe Premiere decently, but with ComfyUI it's very slow. I'd appreciate it if you could give me direction and a solution, any thoughts? Thank you
Hello friend! The first thing you can do is reduce the sample steps in the KSampler. If that doesn't work, another way to speed up generations is to use an LCM sampler node. There are lots of workflows that can be found with a quick Google search. I hope that helps. I think I'll make a quick video about this 👍
So this is not because my laptop is old? I have been googling to find solutions too. If you can create a video for this, it will be great; I believe lots of people are having this issue too. Thanks for your kind reply @@promptmuse. Will be waiting for more from your channel.
@@TheAgeofAI_film the newer the better as with anything
Anyone get this: Error occurred when executing ADE_AnimateDiffLoaderWithContext ... just running Kosinkadink's animated prompt?
I do everything up to 8:09, but when I click on the file, the window that opens does not do anything :(
Perfect video.
Thanks so much! I'm glad you enjoyed :)
If you can avoid Wifi for shadow, use a network cable.
Hi, great video. Followed all the steps, and I already have A1111 installed; however, I get no interface loading. I just get "Using pytorch cross attention" as the last line in the console and it never starts... any ideas? I am using Win 11... which I'm now fully aware is totally S*&T!!
FIXED... just for anyone coming across the same issue: Win 11 Defender was stopping ComfyUI from loading without informing me, so if this occurs, simply allow the ComfyUI folder through.
Was looking for a tutorial on the viral joker lil yachty walkout.
I saw a walkthrough video on this setup with Viggle.
Versions prior to V0.22.2 will no longer detect missing nodes unless using a local database. Please update ComfyUI-Manager to the latest version.
Is the NVIDIA T4 good, as Shadow is not available in my region?
Yeah for sure, it's an RTX-class card and will run all of this. If you do not want to rent an entire operating system, you could opt for RunPod or RunDiffusion. I like having an entire OS so I can run 3D programs and games on it!
Me too. I was thinking Alicia Keys was showcasing her new role on YouTube. Anyway, you are beautiful, Prompt Muse. And you are real, with a face. ❤🌹❤️ Congratulations
11:54 That manager button is missing here. I am confused
The Manager needs to be installed; check the 6:40 mark for that 👍
Thanks a lot. Now everything works like a charm. @@promptmuse
Do you think there is a way to combine this with open pose?
There is indeed, there is a multi-ControlNet workflow; just download the OpenPose model via the ComfyUI Manager and drop it down from the list, just like we did with the lineart: civitai.com/api/download/attachments/213 This workflow is from Inner Reflections :)
@@promptmuse Thanks a bunch! :) :)
Can I download Mac versions of those apps? I have 16 GB of RAM on my MacBook.
Here you go my friend, I haven’t tested this method myself. But from the comments looks like it works :) www.reddit.com/r/StableDiffusion/comments/1506nfu/how_do_i_install_comfyui_on_a_mac/
is this SD only or does it also support SDXL?
It supports SDXL as well 👍
OK, it works on my CPU, 30 min per image :)
Damn……… what nodes are you using? Are you upscaling, or is this the basic default workflow??
Nice tutorial. I tried this on a MacBook Pro M1 Max; unfortunately it does not work with AnimateDiff, though ComfyUI works for images. If anyone knows how to make AnimateDiff work, let me know.
I have AnimateDiff working in ComfyUI on an MBP M1 Max. Are you still in need of help?
@@larsisdahl Yes, I still need help. I followed the tutorial, all I get is a black image when rendering animation with AnimateDiff. I haven't found solution for this.
Do you see any error messages in the terminal window? @@rutababelyte7790
Hi, I'm also using an M1 running ComfyUI, and when it runs to the KSampler it shows an error: "'ModuleList' object has no attribute '1'". Do you have any idea what I should debug? @@larsisdahl
The Inner Reflections guide cannot be found.
I had the same problem for a bit. Here's the link.
Now that SVD is out, is this obsolete now?
The currently released version does not support vid2vid. But man, I can see it replacing pretty much everything out there in a few weeks 🤯 It's only in the experimental phase, so much to come from SVD. It still runs using the ComfyUI setup 👍
What is this? Error occurred when executing MiDaS-DepthMapPreprocessor:
Is 6GB VRAM enough?
This sounds like rocket science *cry*
Your face is brighter than my future
Is it me, or is Prompt Muse's talking cadence starting to sound a lot like Nerdy Rodent's?
I don't have a "Manager" button
I finally did it! You're a lifesaver, thank you so much for this tutorial \( ̄︶ ̄*\)
Xsqueeze me! $40 p/m! I'm not gonna act like buying the parts and building a suitable rig will replace the convenience of generating anywhere from anything, but damn! Good tutorial btw
🤣 yeah it's quite a painful spend on top of everything else atm, but it served me well in the interim as my PC was on its last legs. New rig sorted now and it's heaven 🙏
Um, 12 GB of VRAM isn't "low-end PC" territory.
She's too cute, I can't concentrate. Can she just read the phone book, that would be zexy.
Four videos and three hours later, it still doesn't work. How does anyone say this is easy?
Too hot can't focus ahhhhh