Transforming Your Videos Couldn't Be Easier!
- Published: 28 Sep 2024
- In this video, we delve into the SD-CN-Animation extension in Stable Diffusion. Creating new videos or modifying existing ones has never been easier. With detailed prompt descriptions, ControlNet, and LoRA, you can produce beautiful animations. The RAFT method significantly reduces the flickering problem.
Although the additional settings in vid2vid remain a mystery, I will share any new information on my Discord page once I discover more. Keep an eye out!
📣📣📣I have just opened a Discord page to discuss SD and AI Art - common issues and news - join using the link: / discord
🤙🏻 Follow me on Medium to get my Newsletter:
- Get UNLIMITED access to all articles: / membership
- Laura: / lauracarnevali
- Intelligent Art: / intelligent
📰 Medium Article:
/ sd-cn-animation-extension
📌 Links:
- GitHub SD-CN-Animation: github.com/vol...
- RAFT paper: arxiv.org/pdf/...
00:51 Discord Page - JOIN US ;)
02:15 Install the SD-CN-Animation extension
04:25 Text-to-video animation (txt2vid tab)
07:31 Processing strength (Step 1) and Fix frame strength (Step 2)
08:52 Where to find the outputs
09:16 Video-to-video (vid2vid tab)
15:07 Conclusions
Good video :) I started using SD-CN-Animation a few days ago and it's great; I use similar settings. I have three tips that increase the stability of the output video:
1. Take the image that you generate in img2img, and in SD-CN add another ControlNet with "reference_only" and put that image in there (so you have two ControlNets: one with tile and another with reference_only using the image from img2img). The downside of this is that processing time increases a lot :(, but you will get more coherence and a better stylized video.
2. Sadly, if you try to output a video at a low resolution (below 512x512), the flickering is worse, so if you want amazing results it's better to increase the output resolution.
3. If your input video has a lot of rapid movement (like a dance video) you will see a lot of ghosting; you can decrease that effect with these settings (found after a lot of trial and error):
"occlusion_mask_flow_multiplier": 3,
"occlusion_mask_difo_multiplier": 4,
"occlusion_mask_difs_multiplier": 3,
"step_1_processing_mode": 0,
"step_1_blend_alpha": 0,
If your video doesn't have the ghosting effect, don't use these settings, because they will increase the flickering.
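For anyone wondering what those occlusion-mask multipliers act on: conceptually, the extension warps the previous stylized frame along the estimated optical flow and masks out occluded pixels so they get regenerated rather than blended. A minimal numpy sketch of that idea; the function names, nearest-neighbour warp, and threshold value are all illustrative, not the extension's actual code:

```python
import numpy as np

def warp_with_flow(prev_frame, flow):
    """Backward-warp prev_frame (H, W, C) along a dense optical-flow field (H, W, 2)."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the flow to find the source pixel, clamped to the image bounds
    # (nearest-neighbour sampling keeps the sketch short; real code interpolates).
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

def occlusion_mask(flow, flow_multiplier=3.0, threshold=8.0):
    """Flag fast-moving pixels as occluded so they get regenerated, not blended."""
    magnitude = np.linalg.norm(flow, axis=-1) * flow_multiplier
    return magnitude > threshold

# Toy example: a 4x4 single-channel frame and a uniform one-pixel flow.
frame = np.arange(16, dtype=float).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2))
flow[..., 0] = -1.0            # each output pixel samples from x - 1
warped = warp_with_flow(frame, flow)
mask = occlusion_mask(flow)    # small motion -> nothing flagged as occluded
```

Raising the multipliers makes more pixels count as "occluded", which trades ghosting for extra flicker, which would explain the warning above.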
Thank you! Pinned! :)
Thanks for this. I'm trying to use this to make some 3D rendered characters look a bit more realistic. It works pretty well but still trying to get the most consistent temporal coherence. Any new tips since you posted this? I'm really digging in and running tests to try and see what all the settings do exactly.
Thanks so much for this video. Hard to find info on using sd-cn-animation. Your video is super helpful!
you are the first person to explain controlnet in a way that makes sense to me. can't thank you enough, great job!
Thanks Laura 🫡
You’re so awesome!! Great teacher. I’m going to join your discord later when I get on my pc. I’ve learned more in 20 mins of watching your videos than most. You explain everything so well.
Thank you so Much, Laura for this tutorials. Much appreciated ❤🎉
What a great tutorial. Excellent presentation visually, aurally, and organizationally!
This was very helpful Laura. Thank you very much.
Haven't watched the video, but I saw the thumbnail, and "video stable diffusion" sounds like something that would absolutely destroy my PC
🏆🏆🏆 love watching your videos . Thank you for the time you take to make them
Impressive, a good tool if you're looking for really consistent animations, but a very long process indeed. Nice video, thanks!
WarpFusion tutorial on Mac or RunPod
Is there an option to use WarpFusion for free?
Thanks for this tutorial, I think this is the easiest method.
Hi Laura, I'm new to this, but I don't have ControlNet in my version (was it removed?)
I don't think it has been removed. Do you have ControlNet installed and activated in the Extensions tab?
Can you do this with Invoke AI's webui? Or it has to be Automatic1111?
Only Automatic1111 is fully supported
Hi... does this work on a Mac?
Yep, it does!
omg this girl is cute :) and not just cute, but that good kind of cute :)
No horny on main
It has a problem with Wildcard Manager..
sorted?
Can this method be used to make videos like this? They ran a music video through it: ruclips.net/video/O7-SCsgMgnk/видео.html
Probably it would take too much time, but you could use EbSynth, I suppose
What you want for this is Warpfusion, nothing else is remotely close.
Congrats Laura! I'm Brazilian and you are Italian speaking English in your tutorials, which helps me understand your content better since we have similar native languages! Good job! I hope your channel grows a lot! Keep giving us SD content! Thanks a lot! My name is Tio Ilmo!
Hi Tio Ilmo! Happy to hear that :)
Laura, your videos are perfect. You're so good at explaining, please keep producing content. I subscribed in 30 seconds flat. Plus, your Italian accent, I could listen to it all day long!!
Thanks lovely ;)
Have to say… what a nice young lady! Subscribed.
Hi Laura, love your videos. I was just trying to follow your tutorial but I don't seem to be able to find the SD-CN-Animation tab. I'm loading from the same "Extension index URL" but it mustn't exist any more? If you know a workaround, please let me know. Thanks heaps.......
Sorry Laura, all sorted, but thanks for your great tutorials.......
The result, imo, is... just awful?
Amazing video, I will experiment with this soon. Although, I used img2img and converted a batch from a 30fps, 15 sec video, and it turned out quite good on the first attempt.
I am curious, what GPU are you using?
RTX 3060
✍👍
🎉🎉🎉🎉🎉
If there was a way to make it stop "imagining" the image for each frame, that would solve this. Is there any? Like, "use the prompt just for the first frame and do your job" :)
Maybe in a future update, but I'm not sure
It takes too much time with multiple ControlNets; testing a 2 sec video gives me an estimate of about 6 hr 45 min 😅😅
Yeah, it took me 4 hours for an 11 second video! Probably something that needs improvement.
Can you do this with Invoke AI's webui? Or it has to be Automatic1111?
This is for A1111
I just tried, and I don't see the tab
Do you have any errors?
How do I update SD to the latest version? 😊
git pull when in the main folder :)
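In case "the main folder" is unclear: it means the root of your stable-diffusion-webui checkout (the folder name depends on where you cloned it). Here's a throwaway demo of the update flow, using a local scratch repo standing in for the GitHub remote so it can be tried safely anywhere:

```shell
# Real-world usage is just:  cd /path/to/stable-diffusion-webui && git pull
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=u@example.com -c user.name=u \
    commit -q --allow-empty -m "old version"
git clone -q "$tmp/origin" "$tmp/webui"   # stands in for your local install
git -C "$tmp/origin" -c user.email=u@example.com -c user.name=u \
    commit -q --allow-empty -m "new version"
git -C "$tmp/webui" pull -q               # the actual update step
git -C "$tmp/webui" log -1 --format=%s    # confirms you're on the latest commit
```

After a real pull, restart the webui so the updated code is actually loaded.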
Please help me with Thin-Plate-Spline-Motion-Model for SD.ipynb
What about that?
In step 3:
AttributeError Traceback (most recent call last)
in ()
8 if predict_mode=='relative' and find_best_frame:
9 from demo import find_best_frame as _find
---> 10 i = _find(source_image, driving_video, device.type=='cpu')
11 print ("Best frame: " + str(i))
12 driving_forward = driving_video[i:]
1 frames
/usr/lib/python3.10/enum.py in __getattr__(cls, name)
435 return cls._member_map_[name]
436 except KeyError:
--> 437 raise AttributeError(name) from None
438
439 def __getitem__(cls, name):
AttributeError: _2D
In fact, I uploaded a png and mp4 of the same size.
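Not certain it's the cause here, but `AttributeError: _2D` in that notebook is commonly reported when a newer face-alignment release renamed the enum member `LandmarksType._2D` to `LandmarksType.TWO_D`; people usually fix it by pinning an older face-alignment or using the new name. A version-agnostic lookup, sketched with a stand-in enum (the class below is a stub, not the real library):

```python
from enum import Enum

# Stub standing in for face_alignment.LandmarksType on newer releases,
# where the member is named TWO_D instead of the old _2D (assumed rename).
class LandmarksType(Enum):
    TWO_D = 1
    THREE_D = 2

# Try the old member name first, then fall back to the new one; getattr with
# a default swallows the AttributeError the enum raises for missing members.
landmarks_2d = getattr(LandmarksType, "_2D", None) or getattr(LandmarksType, "TWO_D")
print(landmarks_2d.name)
```

Against the real library the same `getattr` fallback works on both old and new versions.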
Great video Laura, thank you! I joined your Discord also!! 🥳🥳
Thank you so much for the tutorial!
Unfortunately, I get an error message after trying to generate the first frame that says "Torch not compiled with CUDA enabled".
Do you have some magic information to help? 😀
Hi, that is not an issue if you are using a Mac. Do you see any other errors?
@@LaCarnevali I am using a Mac, but after generating the first frame I get this message in SD: "An exception occurred while trying to process the frame: Torch not compiled with CUDA enabled", and no other error messages in warp
@@CrazyBullProduction try launching webui.sh with the --no-half flag:
./webui.sh --no-half
This is one of the clearest explanations I've experienced on Stable Diffusion.
Love your videos. Thank you
Thank you! - I tried the mov2mov extension for Automatic1111 and like it a lot! - Wondering why not so many people use it.
Because not many people are aware of its existence, which is understandable given the quantity of extensions for A1111
Thank you, it's really great and well explained! I was wondering... what's the main difference between this plugin and the Temporal Kit one? :O
Hello!! :) What are you referring to when saying 'temporal kit plugin'?
Hi Laura! Your videos are awesome. What do you recommend for getting the best animation results, this method that you explain right here or the one from your previous video?
Hi, this is a very good one, but it takes quite a bit of time
I hope you can see that the original video has her eyes closed and the output does not. Also, there's an annoying flickering in the eyes. The reason for this is that the models are not properly trained for img2img: they are trained with faces always looking at the viewer. When you train a model for other platforms, like MetaHumans, you have to do it with the eyeballs pointing in all directions and with dilation. I've been trying to explain this to several YouTubers so they can put the word out, but nobody seems to understand the issue or, even worse, they don't care. So the problem will persist with flickering in the eyes and hair until the models are properly trained. This is of course from the eye of a professional. For TikTok videos I guess it's alright?
Hi, happy to discuss this further. For the video, I haven't trained a model, just picked a random one. I think with EbSynth there is less of this issue. Anyway, I will try to train a model looking in all directions and will test it. Happy to hear a different point of view, especially when constructive (like in this case)