Bring Images to LIFE with Stable Video Diffusion | A.I Video Tutorial
- Published: 28 May 2024
- Stable Video Diffusion is a powerful AI tool that turns images into short videos. This tutorial will teach you how to easily create videos from your pictures and text prompts using SVD.
ThinkDiffusion: bit.ly/3PoCkoN
How to Install ComfyUI: • Generate AI Videos Wit...
ComfyUI: bit.ly/3LM1hbN
ComfyUI manager: bit.ly/3LRqYrg
SVD img2video model: bit.ly/3Rkl18h
Base Workflows: bit.ly/3RkO6Aj
My custom workflows: bit.ly/3TkQ5Y0
SDXL model: bit.ly/3v1hIeA
Topaz Video AI: bit.ly/3t04Otl
Disclaimer: Some links in the description are affiliate links. If you make a purchase through them, I may earn a small commission at no extra cost to you.
⏲ Chapters:
0:00 Stability.ai's video model
0:16 How to Run Stable Video Locally
0:16 How to Run Stable Video Online
1:48 Why use ThinkDiffusion?
2:49 Image to Video
6:25 Upscaling AI Video
7:00 Text to Image
7:54 AnimateDiff Tutorial
Support me on Patreon:
bit.ly/2MW56A1
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
Tiktok: / justmdmz
Twitter: / justmdmz
Facebook: / medmehrez.bss
Website: medmehrez.com/
#AIvideo #stablediffusion #ThinkDiffusion
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.
The future of video seems WILD 🚀
ThinkDiffusion: bit.ly/3PoCkoN
- If you can't see the Manager after installing, make sure you have the latest version of Git and reinstall. If the issue persists, retry the installation methods here: civitai.com/models/71980/comfyui-manager
ty for the workflow! it really saves me a lot of hassle compared to using other websites just to animate my images!
The best person to explain the montage method in A.I
Thank you bro
Very concise tutorials. All your tutorials always help me. 👍👍
glad to hear
Bro you are the best tutorial person I've found. Thanks
glad I could help
very useful!
Thank you!
If you want longer videos, just save the final frame and run it again as the new source image. It may take a few attempts to get camera motion that doesn't make you sick, but it'll get the job done.
great idea, but there might be a visible cut, did you try it?
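The chaining trick above (reuse the final frame as the next run's source image) can be sketched in a few lines. This is a hypothetical helper, assuming the frames were exported as numbered PNGs (e.g. via an image-sequence output in ComfyUI); the folder and file names are examples, not part of the tutorial.

```python
# Sketch: chain SVD runs by reusing the final frame as the next source image.
# Assumes frames were saved as numbered PNGs; names here are hypothetical.
from pathlib import Path
from PIL import Image

def last_frame_as_next_source(frames_dir: str, next_source: str) -> str:
    """Copy the last rendered frame so it can be loaded as the next run's input."""
    frames = sorted(Path(frames_dir).glob("*.png"))
    if not frames:
        raise FileNotFoundError(f"no PNG frames found in {frames_dir}")
    Image.open(frames[-1]).save(next_source)  # becomes the next run's input image
    return str(frames[-1])
```

As noted in the reply, expect a possible visible cut at each join; crossfading a few frames in an editor can soften it.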
Thanks again!!! Great tutorial
Glad you liked it!
@@MDMZ Thanks to your help, I just tried Think Diffusion, it is very good, it is very fast and easy to use. This is what I did: ruclips.net/user/shortsGy-HeW3Stj8?si=sc1i8Z6qiuACuC9V
THE BEST TUTORIAL!!! THANKS THANKS GRACIIIIIAAAAAAAAS 😍😍😍
Glad it was helpful!
Your tutorials are much easier to follow than most YouTubers'. Keep it up!
But if you have time, you still need to do the controlnet animatediff or any video generation method using controlnet.
Noted! and thanks a lot
BIG BIG BIG FACTS!!! FULL SUPPORT INSTANT SUB'D!
love this video
Whatever the others do, you're always next level and the most complete at explaining. Thank you brother ❤
this means a LOT 🙏
Awesome bro 😎 big fan ❤
Great and easy tutorial, and thanks for the workflow template. Though I'm not able to find the Video Combine node.
did you try installing it from the ComfyUI Manager?
Really good content 👍
Thank you 🙌
incredible, really helpful ❤❤
Glad it helped!
Great tutorial, thank you! Do you know a way to upscale the video directly in comfyUI? Not everyone has Topaz Video AI, its quite expensive...
there are ways to do it with custom nodes, I covered one example here: ruclips.net/video/kmZ5S2X55fU/видео.htmlsi=ehay3olnLfJckLUN&t=547
Awesome videos thank YOu!
Question is it possible to upload my custom models to thinkDiffusion (comfyUi)?
Yes you can!
Can this be done on a laptop? I have a 10th-gen i5 and an RTX 3060.
hello, great guide. Congratulations. But I have a question, if for now the maximum limit is 5 seconds, how do some people animate the models that run on instagram? Just faceswap?
I think you are referring to video2video, I just posted a video on that, this is img2video
@@MDMZ tks
is it possible to control the result of the video by giving some prompts?
And another question. Is there a model to produce transition between two images/frames (I have frame images, but I want to turn them into a video and frames are not very close to each other so it cannot be used in conventional video editing software)?
I guess you can add prompt nodes to have more control, but I'm yet to try it myself; you can try using RunwayML instead.
I don't have an answer for the second question, sorry :/
is there a way to add a Clip Text Encode module between the SVD_img2vid_Conditioning and the KSampler to add positive and negative prompt to have more control with the camera movement ?
interesting, not sure about that, but will look into it
@@MDMZ thank you very much, I think it could be an even greater tool to be able to influence the animation with prompts. Not sure if it's possible, or even if the current models are able to do that.
@@ertezsssz I remember seeing motion models that allow that, but you can also try including movement description in the prompts, it usually helps
I searched on civitai without finding one
Can you do this on the normal Stable diffusion? With the normal UI, I hate this type of UI with tons of modules everywhere. If I installed comfy UI locally, would it be a separate ui from the normal Stable diffusion? It wouldn't launch the same for sure. Would that mess up my current SD local install?
This won't interfere with your local A1111 installation. I highly recommend you give it a shot; it looks complicated, but it's much more flexible and gets easier with time.
Hi dude, is there any method to increase the length of the generated video?
not that I know of :/
Hey I don't have pc
Can you do this on mobile especially in moon valley ai
I've actually covered this topic in the video
I'm currently using stable diffusion video on Mac m2 pro and I have the same problem with the ksampler... in particular this one: "Conv3D is not supported on MPS". Do somebody know how to deal with it and fix the problem? It would be very nice if someone can help me 🙏🏻
running on MAC can be tricky sometimes, looks like a PyTorch issue, you might be able to get some help here: github.com/cocktailpeanut/mac-svd-install
Please let me know if that worked
If it persists, I would definitely try an online solution instead
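For the "Conv3D is not supported on MPS" error above, one commonly suggested workaround (an assumption to verify on your setup, not something confirmed in the video) is letting PyTorch fall back to the CPU for ops the MPS backend doesn't implement. The variable must be set before `torch` is imported, e.g. at the top of ComfyUI's launch script or in the shell before running it:

```python
# Sketch: enable PyTorch's CPU fallback for ops missing on Apple's MPS backend.
# Must run BEFORE `import torch` anywhere in the process.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
# import torch  # only import torch after the variable is set
```

The fallback ops run on the CPU, so expect generation to be noticeably slower; if it stays impractical, the online route mentioned in the reply is the safer bet.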
I installed the ComfyUI Manager, but the option isn't visible. I updated ComfyUI but the Manager option is still not showing. Can you tell me what to do?
make sure you have the latest version of Git and reinstall. If the issue persists, retry the installation methods here: civitai.com/models/71980/comfyui-manager
can I make loop animated pictures, or what called cinemagraph with AI?
I don't think there are ways to make it look so seamless using these tools, unless there's a method I don't know of.
@@MDMZ Thank you.
The Video Combine node does not output the video; it has an output that says "filenames". I don't know which node I should use to see the video.
you mean the video is not being saved to your output folder ?
is it possible to run this with laptop? Seems load heavy.
It will depend on the specs, you can definitely give it a shot
Can you do this with stable diffusion too?
this IS stable diffusion
is this only for PC?
you might be able to make it run on Mac, some people did
I get this error. Does anyone have any idea about how to fix it?
Prompt outputs failed validation: Exception when validating node: VideoCombine.VALIDATE_INPUTS() got an unexpected keyword argument 'frame_rate'
VHS_VideoCombine:
- Exception when validating node: VideoCombine.VALIDATE_INPUTS() got an unexpected keyword argument 'frame_rate'
I had the same problem and fixed it by Update All in Manager.
@@luqaszoq It worked for me, thanks a lot for your help, mate!
is it possible text prompt?
yes, I've covered it in the video
Can this be done in Automatic1111?
I believe so
followed all the steps but i get
Prompt outputs failed validation
ImageOnlyCheckpointLoader:
- Value not in list: ckpt_name: 'svd.safetensors' not in (list of length 21)
any help?
have you downloaded the svd model? make sure it's in the right folder
You have to actually select the downloaded checkpoint from the list. The default name doesn't match the one you downloaded.
When I queue prompt, my problem is Cuda out of memory how to fix that?
it means your VRAM is a little low for it. Did you try setting a lower resolution? And maybe a lower step count?
@@MDMZ how to set lower resolution?
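On the resolution question: in the workflow shown, the output size is set via the width/height inputs on the SVD_img2vid_Conditioning node. As a rough guide (my assumption, not from the video), SVD was trained around 1024x576, so scaling that base down while keeping dimensions divisible by 8 is a reasonable way to fit into less VRAM:

```python
# Sketch: pick a smaller, VRAM-friendlier size for the SVD_img2vid_Conditioning
# node. Base of 1024x576 is SVD's usual training resolution; the scale factor
# is something to tune for your GPU.
def scaled_svd_size(scale: float, base=(1024, 576)):
    w, h = base
    snap = lambda v: max(8, int(v * scale) // 8 * 8)  # keep multiples of 8
    return snap(w), snap(h)
```

For example, a 0.75 scale gives 768x432, which should cut memory use substantially versus full resolution.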
Gen 2 is easier?
it definitely is, but to me SVD gives better results
cant find the manager option
check the pinned comment
Doesn't work 'NoneType' object has no attribute 'encode_image' "How to solve this?
which node is showing in red when you execute?
how to do this locally
it's covered in the video
Is anyone else getting "500 - Internal Server Error" when trying to upload images? Or know how to fix that?
hi Jamie, I might need more context. Does this happen when you try to generate, or literally when you browse for an image and try to upload it? Are you using ThinkDiffusion or running locally?
I hate ComfyUI. Is there any method to use this in the Automatic1111 UI?
why!?
@@MDMZ it's harder for me to understand that node mess; Automatic1111 is just tabs and options
I have a problem like this: ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last): and here goes long list with the cherry on top - RuntimeError: MPS backend out of memory (MPS allocated: 15.61 GB, other allocations: 2.90 GB, max allowed: 18.13 GB). Tried to allocate 562.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Prompt executed in 3701.64 seconds
Appreciate for any help. MacBook Pro m1 16GB
plz check the pinned comment
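The MPS out-of-memory error above names its own workaround: setting `PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0` removes PyTorch's upper memory limit on Apple Silicon (with the error's own caveat that this may cause system instability). As with the fallback flag, set it before `torch` is imported or in the shell before launching ComfyUI; lowering resolution and frame count also helps, since 16 GB is tight for SVD:

```python
# Sketch: lift PyTorch's MPS memory cap, as suggested by the error message
# itself. Run BEFORE `import torch`; may destabilize the system under load.
import os
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
```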
Time to cook my 4090
welcome to the club
not a TIFF file (header b'n' not valid)
please help man, I don't get this
happens with jpegs, png work fine
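Since the reply says PNGs work fine while JPEGs trigger the "not a TIFF file" error, a quick workaround is re-saving the source image as a PNG before uploading. A minimal sketch with Pillow (the file names are just examples):

```python
# Sketch: re-save a JPEG as a PNG so uploaders that choke on JPEGs accept it.
from PIL import Image

def jpeg_to_png(src: str, dst: str) -> None:
    """Open any Pillow-readable image and write it out as a PNG."""
    Image.open(src).convert("RGB").save(dst, format="PNG")
```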
RuntimeError: input must be 4-dimensional
do you see any nodes turning red when you execute? I'm suspecting that you're loading a non-video file, or a corrupt video file
It's because you have an AMD card and are running it with --directml, that's what I found online.
Video generator
Think diffusion is a rip off of a much larger and much better service. Everyone uses that one. Way more apps. Way better service. Think diff is nothing special.
Except it's totally free
Lol what service
What service are you referring to?
@@maresionut-laurentiu7128 Think diffusion is free just for 30 minutes