Do you struggle with prompting? 🌟 Download a sneak peek of my prompt guide 🌟 No membership needed: ⬇ Head over to my Patreon to grab your free copy now! ⬇ www.patreon.com/posts/sneak-peek-alert-90799508?Link&
Thank you so much. I'm really struggling to get the good results everyone else seems to get.
Awesome guide thanks...
Just to add my observations: for vid2vid, adding IPAdapter helps control the resulting animation. Along with the depth ControlNet I also use HED (or any line extractor you like), but with very low strength, usually 0.2.
Other than that, try using LCM, it will help you render faster. The LCM AnimateDiff model is not as good as v3, so I use the LCM LoRA for the KSampler and the v3 motion model for AnimateDiff.
Thank you, these are fantastic additions! When creating the guide, my aim was to keep things straightforward for beginners. However, incorporating advanced techniques like LCM and IPAdapter is definitely on the agenda for upcoming videos on AnimateDiff. Your insights are invaluable, and I look forward to exploring these in more depth soon. Stay tuned!
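For anyone who wants to try the vid2vid tips above, here is a minimal sketch of the suggested settings written as a plain Python dict rather than an actual workflow file. The node names, filenames, and value ranges are assumptions based on common AnimateDiff/LCM setups, so treat them as starting points to tune, not a definitive recipe.

```python
# Illustrative sketch only: example values for the vid2vid tips above.
# Names and ranges are assumptions -- adjust them in your own ComfyUI workflow.
import json

vid2vid_settings = {
    "controlnets": {
        "depth": {"strength": 0.8},        # main structural guidance (assumed value)
        "hed_lineart": {"strength": 0.2},  # very low strength, as suggested above
    },
    "ipadapter": {
        "weight": 0.7,                     # helps keep the animation consistent (assumed value)
    },
    "sampler": {
        "name": "lcm",                     # LCM sampler selected in the KSampler node
        "steps": 8,                        # LCM needs far fewer steps (typically 4-8)
        "cfg": 1.5,                        # LCM works best with a very low CFG
        "lora": "lcm_sd15_lora",           # LCM LoRA applied to the checkpoint (assumed filename)
    },
    "animatediff": {
        "motion_model": "mm_sd15_v3",      # v3 motion module, as recommended above (assumed filename)
    },
}

if __name__ == "__main__":
    # Print the settings so they can be copied into your workflow by hand.
    print(json.dumps(vid2vid_settings, indent=2))
```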
320x576 is the optimal resolution for videos. You can upscale it 2x before putting it through the final KSampler, and even again after that, if your PC can handle it, before putting it through RIFE.
Thanks for the heads up on resolution tricks! You are right.
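If it helps to see the numbers, here is a tiny helper that prints the resolution at each stage of the chain described above: base generation, 2x upscale before the final KSampler, and an optional second 2x before RIFE. Purely illustrative; the factors are assumptions you can change.

```python
# Minimal sketch of the resolution chain suggested above.
def upscale_chain(width: int, height: int, factors=(2, 2)):
    """Yield the resolution at each stage of the pipeline."""
    yield ("base generation", width, height)
    for i, factor in enumerate(factors, start=1):
        width, height = width * factor, height * factor
        yield (f"after upscale #{i}", width, height)

if __name__ == "__main__":
    for stage, w, h in upscale_chain(320, 576):
        print(f"{stage}: {w}x{h}")
    # base generation: 320x576
    # after upscale #1: 640x1152   (feed this into the final KSampler)
    # after upscale #2: 1280x2304  (only if your GPU can handle it, before RIFE)
```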
Since some of you asked: you can find the basic workflows from Inner Reflections here: civitai.com/articles/2379/guide-comfyui-animatediff-guideworkflows-including-prompt-scheduling-an-inner-reflections-guide
If you get an error because you need FFmpeg, skip to 9:08 where I explain the installation (a quick check for FFmpeg is sketched below this comment).
Don't forget to grab my free guides:
FREE Workflow Guide: www.patreon.com/posts/get-your-free-99183367
FREE Prompt Guide: www.patreon.com/posts/sneak-peek-alert-90799508
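For readers hitting the FFmpeg error, here is a rough sketch of a quick check that FFmpeg is installed and on your PATH. It only uses the Python standard library, so it should run in any environment where ComfyUI's Python runs.

```python
# Minimal sketch: verify FFmpeg is installed and reachable before running the workflow.
import shutil
import subprocess

ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path is None:
    print("ffmpeg not found on PATH - install it and/or add it to PATH (see 9:08 in the video)")
else:
    # Print the first line of `ffmpeg -version` as a sanity check.
    result = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
    print(f"ffmpeg found at {ffmpeg_path}")
    print(result.stdout.splitlines()[0])
```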
I'm pretty new to all of this, today being my first day using ComfyUI. I could not find any motion models in the download link provided for AnimateDiff. However, I figured they were the same models I used in A1111, mm_sd_v14 and mm_sd_v15, pasted them into the folder, and now I'm generating animations. The rest of your guide is kickass. I like how you can specify which frames will show what in the prompt, and how you can specify a location for each section. 👍's up bro.
Welcome to the world of ComfyUI! Thanks for the remark. This is all changing so fast, I will surely make an updated video soon. Happy creating.
If you get an error from AnimateDiff saying it needs FFmpeg, skip to 9:08 where I demonstrate the installation.
Don't forget to download my free workflow guide:
www.patreon.com/posts/get-your-free-99183367
and my free prompt guide:
www.patreon.com/posts/sneak-peek-alert-90799508
Happy creating. 😊
Thank you very much. Can we control a group of people's activities by using this platform? If yes, please advise.
The platform discussed in the video primarily focuses on creating and transforming animations using AI techniques like Stable Diffusion and AnimateDiff. It is not designed for managing or coordinating group activities, I'm afraid.
I really enjoy your videos and appreciate the effort you put into them. However, I noticed the thumbnail posed the question "Better than Sora?", which piqued my interest, but I didn't hear it addressed in the video. It left me feeling a bit misled, as I was looking forward to that comparison. Perhaps in future videos, the thumbnail could more closely reflect the content? Thanks for considering my feedback!
Thank you for your thoughtful feedback and for highlighting this. I completely agree that the thumbnail may have set an expectation that wasn't fulfilled in the video, and for that, I sincerely apologize. I've updated the thumbnail to "Local AI Install" to better align with the video's content. Looking back, "Local Sora Alternative" might have been a more accurate choice, and I’ll keep this in mind for future content. Your feedback is invaluable in improving the clarity and honesty of my communication. Thanks again for your support and understanding!
@@AIKnowledge2Go Thank you so much for taking my feedback to heart and updating the thumbnail. It's refreshing to see creators like yourself who genuinely care about the audience's experience and the quality of their content. It's disheartening to see the trend of misleading tactics being used elsewhere on the platform, especially when it overshadows the hard work of genuine content creators. Your commitment to authenticity and quality really sets you apart. Thanks again for listening and for all the great work you do!
@@polystormstudio Engaging with my audience and receiving your feedback is a crucial part of staying true to myself and the content I create. Your comment is a heartfelt reminder of my core values, and for that, I'm deeply thankful. While navigating the fine line between catchy thumbnails and misleading clickbait, my goal is always to deliver on what's promised. Your recognition of the effort to maintain authenticity and quality means the world to me. Thank you once again for your support and for helping keep the spirit of genuine content creation alive.
How do I load the AnimateDiff Loader [Legacy] into my system?
Hi, unfortunately, I do not fully understand your question. Could you please provide more details or clarify what you need help with regarding the AnimateDiff Loader [Legacy]? You need to install the AnimateDiff Evolved package in ComfyUI in order to use this.
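For anyone unsure what "install the AnimateDiff Evolved package" means in practice, here is one rough way to do it by cloning the custom-node repository into ComfyUI's custom_nodes folder and restarting ComfyUI. The paths and repository URL are assumptions, so double-check them against your setup; using ComfyUI Manager is an easier alternative.

```python
# Rough sketch: install the AnimateDiff Evolved custom nodes by cloning the repo.
# Adjust paths to your installation; the URL is an assumption -- verify it first.
import subprocess
from pathlib import Path

comfyui_root = Path("ComfyUI")  # change to wherever your ComfyUI lives
custom_nodes = comfyui_root / "custom_nodes"
repo_url = "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved"  # double-check this URL

target = custom_nodes / "ComfyUI-AnimateDiff-Evolved"
if target.exists():
    print("AnimateDiff Evolved already present:", target)
else:
    # Clone the custom node package; restart ComfyUI afterwards to load the new nodes.
    subprocess.run(["git", "clone", repo_url, str(target)], check=True)
    print("Cloned AnimateDiff Evolved - restart ComfyUI to load the nodes")
```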
@@AIKnowledge2Go I keep making mistakes with this. Do you have a Discord address? I'd like to show it in more detail.
Thank you very much, I appreciate you a lot. Awesome contribution to the community.
Your appreciation is like a warm hug for my content creation soul!
Can you maybe help me with something? I just can't figure it out, and I think I'm messing it up more and more. Do you maybe have Discord? Maybe you can point me in the right direction... @@AIKnowledge2Go
For some reason my NVIDIA GeForce RTX 4060 Ti 16 GB says it doesn't have enough VRAM when I try to run Comfy Flux. WebForge Flux works fine.
Is your ComfyUI updated to the latest version? What CUDA version do you have installed on your machine?
@@AIKnowledge2Go Schnell works fine, Dev doesn't. Ooh, I found the line: "cuda:0 NVIDIA GeForce RTX 4060 Ti" in the startup. I installed it with Stability Matrix.
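When ComfyUI complains about VRAM despite a 16 GB card, a quick diagnostic like the sketch below can help confirm what PyTorch actually sees: the CUDA build, the detected GPU, and its total VRAM. It assumes you run it inside the same Python environment ComfyUI uses.

```python
# Diagnostic sketch: check the CUDA build, detected GPU, and available VRAM via PyTorch.
import torch

print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("device:", props.name)
    print(f"total VRAM: {props.total_memory / 1024**3:.1f} GB")
```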
Is there an SDXL or Pony version of this?
I haven't checked recently, but maybe it's time I do...
How do we animate using our own photos? I'm not interested in prompted characters! 😂
I actually will make a video about that in the near future, but don't expect wonders from it, apart from slight movements and maybe a little camera turn. Happy creating.
@@AIKnowledge2Go I don't get why it's so hard to create moving images. We can create a single frame that's amazing, even in quality, but in motion, cohesion and stability break down.
@@AIKnowledge2Go Maybe try using IPAdapter and FaceID Plus v2 to generate an image based on your own portrait, then add a prompt for movement and send it to AnimateDiff?
Tell me you're German without telling me you're German 😂😂😂
But I could also be Swiss or Austrian... 😂