Will AnimateDiff v3 Give Stable Video Diffusion A Run For Its Money?
- Published: 13 May 2024
- AnimateDiff v3 gives us 4 new models - including sparse ControlNets to allow animations from a static image - just like Stable Video Diffusion. The motion module currently works in both Automatic1111 and ComfyUI, but what sort of animations does it generate?
Update! As predicted, 10 mins after the video release, the sparse control net is now supported 🎉
github.com/Lightricks/LongAni...
== More Stable Diffusion Stuff! ==
* Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
* Installing ComfyUI - • How to Install ComfyUI...
* ComfyUI Workflow Essentials - • ComfyUI Workflow Creat...
* Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
* Add anything to your AI art in seconds - • 3 Amazing and Fun Upda...
* One image Gets You a Consistent Character in ANY pose - • Reposer = Consistent S...
Want to support the channel?
/ nerdyrodent
Thanks for watching! :)
Happy Approximate Birthday Dearest Nerdy! 👋🎂🎉🎈
That hangover though 😆
I don't get how Stability can claim ownership of the data their model generates. How can they expect you to pay them to use images/video you make with it if they don't own the copyright to the images it creates? It would be like Adobe trying to say they own whatever you make with Photoshop. I'm sure they think of it differently, but it just seems like... the wrong way to monetize, to me.
Either way I'm really glad to see things like this
Thank you! 🙏🏻😊🎉
And merry Christmas!
Brilliant! I've been waiting for this one. Thank you
You're very welcome!
yes yes yes! also, thanks for the safetensors tip
It's good to see that AnimateDiff is still improving. SVD produces neat stuff but screw that license.
Awesome as always - happy holidays to you as well!
Happy holidays!
Very cool! Merry Christmas Nerdy Rodent!
Thanks! You too!
Incredible!
Always enjoy your shares! Happy Holidays :)
Happy holidays!
Great video. Nice to see a bit of A1111 love too :)
Nerdy, you are such a great content creator!! I love your videos and of course your ComfyUI workflows!!! Also, your accent is quite easy to understand for us non-English native speakers... UK accent?
To date I haven't been able to get an image-to-image conversion into a different style using a LoRA while losing only a minimum of the object shape and basic lines, like the face structure. If there is anything I am not aware of, please share.
Around the same time this was uploaded, SparseCtrl support has now been added into ComfyUI-Advanced-ControlNet
My prediction was correct! 😆
"Soft weights to replicate "My prompt is more important" feature from sd-webui ControlNet extension, and also change the scaling." that's a great update there.
Could you please link LongAnimateDiff? I cannot find it...
This is what I’ve been looking for, something to guide my animation using multiple still images. I’m a 3D Technical Artist so this is a BIG deal
Merry Christmas Nerdy
1:14 I hope the CEO of StabilityAI comes across your video and takes notes. Their current license doesn't offer even half of the benefits that other closed, paid, non-open-source models provide for the price they ask for.
Hello, what do you think about making a video about DreamCraft3D (an image-to-3D-model method), as nobody has yet tested it? Merry Christmas!
Happy shortest day of the year! According to Sp00ky, it was recently your birthday too. So happy birthday 💋… (us December babes have to stick together) 😉💗
💙
v3 is great (way better than SVD imo, and no license), but generations with it + the LoRA have a tendency to push skin toward the "orange" side - it's also visible in your video. I'm looking for a way to reduce this while keeping the LoRA. The LoRA is very useful for reducing background flickering.
Try putting "sepia" in the negatives? Is it just the skin or the whole image color tone?
Add "colorful" and "saturated" to the positive prompt. Or use a post-process to correct the red and green gamma.
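The post-process idea above could be sketched like this - a minimal per-channel gamma correction in NumPy to tame an orange cast. The gamma values and the toy frame are purely illustrative assumptions, not settings from the video:

```python
# Hedged sketch: per-channel gamma to reduce an orange/red cast.
# gamma > 1 darkens a channel, gamma < 1 brightens it.
import numpy as np

def adjust_gamma(frame: np.ndarray, gammas=(1.1, 1.0, 0.9)) -> np.ndarray:
    """Apply a separate gamma to each RGB channel of a uint8 frame."""
    out = frame.astype(np.float32) / 255.0
    for c, g in enumerate(gammas):
        out[..., c] = out[..., c] ** g
    return (out * 255.0).clip(0, 255).astype(np.uint8)

# Example: a flat orange-ish frame; red is pushed down, blue lifted.
frame = np.full((4, 4, 3), (230, 160, 90), dtype=np.uint8)
corrected = adjust_gamma(frame)
```

The same function could be mapped over every frame of an animation before assembling the GIF; in practice you'd tune the gammas by eye.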
Where did you download the long versions? i am not able to find them online
So have you tried the sparse control models? They seem to have been released now.
6:33 why do you pop the lora loader into the model node? Shouldn't it be in a lora output / won't the result be altered if the original model is replaced?
How do I do simple single image input to animated gif using animatediff v3?
You can use the recently released sparse control net, which as predicted did indeed come out 10 minutes after the video 😆
Great work, wish I was able to do what you do with coding and everything,
Is there a guide to learn coding as in a proper roadmap that you could recommend ? Would really appreciate it
As an autodidact, I would suggest just getting in there and doing it! While I started out on things like Pascal and BASIC, nowadays I'd suggest Python as a starting point. For a more structured method, look for classes in your area.
@@NerdyRodent thank you for replying, keep it up and hope you achieve many success along the way 👍🏼
Yo nerdyrodent, can you explain how the RGB sparse control works? Because I cannot get it to work.
Thanks for the vid! How long did those take to render? I’m struggling with render times, with AnimateDiff, a couple controlnets, and IPadapter - 80 frames of vid2vid takes over 20 mins on a powerful runpod machine. Is that the kind of ballpark you’re in?
I’d say usually around 5 mins for 80 frames, but extras like ipadapter will impact performance
@@NerdyRodent Thanks! 🙏
Getting there slowly, with sparse control and input keyframes we will have something actually useful.
Is it possible to rotate an object 360 degrees? 🤔
Yes! Things like this will do full 360 - ruclips.net/video/j9-W1F7Dcdo/видео.html
We need some SDXL models
Anywhere I could just download the workflow?
I can pop the comparison workflow up on www.patreon.com/NerdyRodent if you like?
Excuse me, big guy, where is the workflow of this video?
If you like, I can pop the comparison flow up on www.patreon.com/NerdyRodent
10 min after your video ROFL... Merry Christmas
I should start picking lottery numbers 😉
@@NerdyRodent oh yes...
I like that you're not here to line your pockets... I made that comment on this one guy who showed "the best AI", but it was all paid sites.
Why still SD1.5? Why not to use SDXL Turbo?
Because SD1.5 is a free license
I think regular SDXL is free license too
@@MicahYaple Can you develop the answer further?
@@JavierGarcia-td8ut Hello, SDXL Turbo also doesn't allow commercial use. In other words, the license doesn't allow monetization by any means, and that includes YouTube video monetization.
@@DavidSeguraIA So just using images / animations from an SDXL model cannot be done if the YouTube channel is monetized? I mean, not selling the generative art itself... just using it in a video?
Much love to you my favorite rodent 🫶
💙