Stable WarpFusion Tutorial: Turn Your Video into an AI Animation
- Published: 2 Jun 2024
- The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/mdmz06231
Learn how to use WarpFusion to stylize your videos. Discover key settings and tips for excellent results so you can turn your own videos into AI animations.
Tech support: discord.gg/YrpJRgVcax
📁Warpfusion Settings:
bit.ly/42rJLPw
🔗Links:
WarpFusion v0.16 (FREE & recommended): bit.ly/3pBh5X3
Warpfusion v0.14: bit.ly/42HozoG
DreamShaper: civitai.com/models/4384/dream...
Stable WarpFusion local install guide: • Stable WarpFusion loca...
Another local install guide: github.com/Sxela/WarpFusion/b...
Best Custom Stable Diffusion Models stablecog.com/blog/best-custo...
How to get good prompts: bit.ly/3IEAzjQ
How to use Luma AI: • Create FPV-Like Videos...
Disclaimer: Some links in the description are affiliate links. If you make a purchase through them, I may earn a small commission at no extra cost to you.
©️ Credits:
Stock video: www.pexels.com/video/energeti...
James Gerde: / gerdegotit
Marc Donahue: / permagrinfilms
Markus Paolo Pe Benito: / markuspaolo_
Alex Spirin: / defileroff
Noah Miller: / noahrobertmiller
Willis Hsieh: / willis.visual
Diesellord: / diesel_ai_art
Stefano Knoll: / steknoll
Josh Doctors: / fewjative
patchesflows: / patchesflows
Yüksel Aykilic: / designyukos
Oleh Ibrahimov: / drimota.ai
nointroproductions: / nointroproductions
Positive Prompts:
"0": [
"realistic female beautiful statue of liberty is a rocky statue dancing, manhattan city skyline in the background, the environment is new york city in day time, realism, hyper detailed, cinematic lighting, photograpny, High detail RAW color art, diffused soft lighting, sharp focus, hyperrealism, cinematic lighting, unreal engine, 4k, vibrant colours, dynamic lighting, digital art, winning award masterpiece, fantastically beautiful, illustration, aesthetically, trending on artstation, art by Zdzisaw Beksiski x Jean Michel Basquiat, high quality, 8k, "
]
Negative prompts:
"0": [
"smoke, fog, lowres, (bad anatomy:1.2), EasyNegative, multiple views, six fingers, black & white, monochrome, (bad hands:1.2), (text:1.2), error, cropped, worst quality, low quality, normal quality, jpeg artifacts, (signature:1.2), (watermark:1.3), username, blurry, out of focus, amateur drawing, colored, shading, displaced feet, out of frame, massive breasts, large breasts ,((ugly)), nude nsfw"
]
⏲ Chapters:
0:00 Introducing Warpfusion
0:34 How to start with Warpfusion
1:08 Google Colab: local vs online runtime
2:01 How to transform a video
2:34 What's an AI model?
3:06 Settings
8:35 How to run Warpfusion
9:23 Animation preview
9:30 How to change GUI settings
12:06 How to export the animation
12:36 Get featured
12:49 Warpfusion + Luma AI
Support me on Patreon:
bit.ly/2MW56A1
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
Tiktok: / justmdmz
Twitter: / justmdmz
Facebook: / medmehrez.bss
Website: medmehrez.com/
#warpfusion #ai #stablediffusion
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos about visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI into creative projects.
Update: I recommend using WarpFusion v0.16: bit.ly/3pBh5X3
Update 03/04: Just re-tested the exact same steps from the tutorial using v0.14 and the DreamShaper 8 model, and it works perfectly!
For tech support and other questions: discord.gg/YrpJRgVcax
Don't forget #mdmz when you post your Warpfusion videos 😉🥳
the problem is: if I pay you, can I use it on a free Colab or free Kaggle account? If not, it seems useless
I'm using v0_16_13 and the script is giving an error at 'Generate optical flow and consistency maps' 🙁
Can someone help me?
YOU ARE CONFUSING THE SHIT OUTTA ME BRO
I'm definitely going to give it a try and experiment with different settings.
amazing and it really does look good
Very good, thanks !!!
very nice and I always wondered how it was done, not easy but the output is impressive
Thank you! Cheers!
Wonderful 👍👍
That's impressive!!
🙏
Amazing !!!!
Cool bro !! 🔥
🙏
Please do a tutorial for the cola shorts clip, it's so amazing
This is an awesome tutorial ❤❤❤
Thank you! Cheers!
ty vv much legend❣
Awesome. Great Tutorial, ❤
Thank you! Cheers!
great tutorial! I followed another tutorial to train my own AI model using rendered images of a character and used it, but my first try wasn't very successful (not sure if the reason is the video or the model). Any chance you could create a tutorial on training our own AI models and using them in WarpFusion?
I followed this once before and it worked great!: ruclips.net/video/kCcXrmVk1F0/видео.html
@MDMZ, Thank you for your assistance! I managed to train my AI model and achieved some progress. However, I'm still struggling with maintaining consistency in masking the female's head throughout each frame. Initially, the mask works for a few frames, but then it starts to take on the form of the original face in the video.
which video tutorial did you use
thanks for the awesome tutorial! Looks amazing, the only thing is mine keeps changing the subject's aesthetic, especially the face, within a couple of frames... is there a way to make it keep the same look as the first frame?
you can try to fix that by scheduling
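for anyone new to scheduling: WarpFusion prompts are keyed by frame number (that's what the "0" in the prompts above means), so you can pin the look at frame 0 and only change what you need at later keyframes. A minimal sketch, with made-up frame numbers and wording:
"0": [
"portrait of a rocky statue, detailed face, sharp focus"
],
"60": [
"portrait of a rocky statue, detailed face, sharp focus, golden hour lighting"
]
as far as I know, each prompt takes over at its keyframe and holds until the next one, which helps keep the subject from drifting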
how do you keep the animation stable like that, so that the face and background aren't constantly changing?
Best vid. Thanks
Glad you liked it!
In the "define SD + K functions, load model" section should I select CPU or GPU for the 'load_to' variable?
Which is better, Warpfusion v0.14 or Stable WarpFusion v0.5.12 ?
Took about 4 hours to render 4 seconds but man it looks buttery smooth. My 1080ti was really trying🤣
glad it worked for you 😁
970 here. I envy you! AhaHaHa
About to try this today wish me luck lol
I've got a GTX 1650, would it be okay?
@@Tamannasehgal19 Yes. Better than a 970. But will take time. Oh, I think it's ok. I don't really know. Your card is better than mine, so...
I will just shut up now.
Nice
Hi MDMZ, my run stopped at 'Video Masking' with the error 'NameError: name 'os' is not defined'. Would be amazing if you could help, thank you.
Same here. Can somebody help us, please? :(
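in case it helps while you wait for support: that NameError usually means an earlier setup cell didn't finish, so Python's os module was never imported into the session. Re-running the install/import cells from the top is the clean fix; as a stopgap (just a guess, not an official fix) you can add the import at the top of the failing cell:
import os
if a different name then comes up missing, restart the runtime and run all cells again in order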
If I have an AMD GPU, is it still safe to use only the online version, or is it the same as not having strong enough hardware?
I tried to follow your instructions here with my own video clip, but I seem to get errors all the time. Maybe it's because there are new versions up and running now that behave differently. What I'm looking for is to use a video clip I have (it's me in front of a green screen), change myself into something fun, like some kind of animation, but not completely different, just making me look animated, and still have the green screen in the background in the final output. Maybe it's not possible in WarpFusion, or what do you think? Should I look at something else, or is it possible with the right prompt and the right model? I just can't find any tutorials about it. And I thought your video was great.
it is possible, I cover how to keep the background untouched in this same tutorial, and shooting on a green screen will definitely help with the separation. And YES, you should look into using a newer version
Would you recommend using this for a horizontal 1080p video? I have an NVIDIA 3070.
both will work fine, it depends how you plan to use the output; if it's for IG/TikTok, just go with vertical
How does this compare to using stable diffusion image to image batching for creating a stylized look for videos?
this is much more consistent
Thank you so much! Great video! Does this also work for cartoon characters with different human proportions?
Aah, sorry, I think we r out of cartoon characters.
where can I find the stable_warpfusion_settings_sample document for the default_settings_path?
You are a monster, man! And I own a GTX 970 😂 so some other tutorials are more "for me"
Enjoy!
Hey!
I'm considering buying a new PC with 8GB of VRAM. Since WarpFusion seems to require more than that (which means I'd have to pay for Colab Pro anyway), is there any benefit to buying a better 8GB VRAM PC, or should I just stick with my laptop? Thanks for the tutorial.
depends on what you intend to use it for, 8GB is a bit low for SD
Can I use my own GPU or do I need to pay for Google Colab?
Can you achieve the same results with Temporal Kit?
I'm 2 minutes in and I'm like 🤯 ... so many steps and it feels so complicated
it only takes a bit of patience, you can do it!
Awesome tutorial!! Quick question: I have a Windows PC, but will this work on a MacBook as well?
Obviously not for Mac.
Also, I would prefer if he mentioned this right at the beginning 🤷🏻♂️
It actually works in the cloud! So your OS doesn't matter
I think you are referring to the local method, this is the online one 😉
@@MDMZ hey, that's what I wanted to understand, to know which PC I can work on, whether it just needs the Colab option and the local install doesn't matter. That's a relief haha, thank you for the info ^^
Can anybody help with how to get this done on a Mac?
Do you need the later versions of warpfusion or can you use the earlier ones?
It's best to use the latest
Question: will this tutorial basically work if I run it locally? I'm not familiar with Colab Pro, but I have a 4080.
yes, it's the same process right after you connect to the local runtime
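for reference, connecting Colab to your own machine goes through a local Jupyter server; per Google's local-runtime docs, the classic notebook flow is started like this (the port is your choice):
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0
then paste the URL with the token it prints into Colab under Connect > Connect to a local runtime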
Quick Question. If I want to try to keep the original background which options do I select?
I actually explain that in the video
@MDMZ, while processing the Video Input settings, I got the following error:
NameError: name 'generate_file_hash' is not defined
Please guide
this is probably the most complicated AI program I've used by far. So many errors you can't find a fix for online, and confusing settings you have to learn on your own because nobody has a full explanation of the settings. It took me almost 300 renders to understand what most settings do, but I feel like it's all going to be worth it once I get it all down.
it's definitely challenging and can be frustrating at times, keep an eye on updates, newer notebooks are much more stable
@@MDMZ lol turns out all I needed to do was tweak the ControlNet settings to get the output I desire. I had no clue consistency and ControlNet correlated with each other
what's the song that people use for Stable Diffusion?
How do you increase the trails effect?
Hi, super video! However, I have been trying for 2 days; it disconnected at 20%. Is there any fix for that? Thank you in advance :)
hey, how do I diffuse only the background but keep the subject original? what's the setting for this masking? thanksss
I have covered that in the video
Can we use it for photos?
When I hit "run all' it can't get passed the "1.4 Install and import dependencies" section, says it's missing some modules (timm, lpips) been scouring discord and see others with this problem but no solutions. I'm using colab pro remotely on a Mac.
did you try re-running? or using a different version?
@@MDMZ yeah I fixed it by downloading the latest version and not the one in your tutorial
@@MikeBishoptv cool !
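for anyone hitting the same thing before updating: timm and lpips are ordinary PyPI packages, so a stopgap (untested across notebook versions) is to install them from a fresh Colab cell and then re-run the dependencies cell:
!pip install timm lpips
newer notebooks appear to pin these dependencies themselves, which would explain why updating also fixes it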
Why does my Colab keep reconnecting? When I reconnect, all my settings go back to default and I can't get back to the first ones I made
bro, if you don't mind telling us, how many compute units did you use per video on average? Especially for that video you just showed?
I burnt like 20 units just for a 13s vid lol
@@reubzdubz wow man! thats some expensive job :D
@@radstartrek that is if you follow the resolution in the video tho. I went down to 540x960 afterwards.
@@reubzdubz ok, so it would cost even more compute units on something like 720p.
honestly, I have never documented it, as I was regularly experimenting with different resolutions and settings, which affects the rendering time heavily. But yes, the lower the resolution, the faster it runs
Is there a way I could use WarpFusion locally with Automatic1111?
Please make a tutorial on it 🙏
you can run Stable Diffusion locally with both A1111 and WarpFusion; I do have a Stable Diffusion tutorial on how to install it with A1111
@@MDMZ thank you!!! You mean a tutorial on using WarpFusion with Automatic1111, not Google Colab, right?
@theartforeststudio8667
Pretty much the same thing, just different platforms.
WarpFusion on Google Colab is used to run Stable Diffusion
A1111 is used to run Stable Diffusion in your browser
Both are set up and work differently, so it depends on which one u r more comfortable with
Thanks, it was really useful. When I save my video and run the last cell, it takes almost 1 hour to complete, though the video that I diffused (the output video) is almost 1 second. I don't really know what is wrong.
Does anyone know how much time it takes to make a 30-second video with WarpFusion? I need to understand this in order to present it at a live activation! Many thanks in advance!
no one will be able to give you the correct answer, it depends on so many factors and it's pretty much impossible to predict until you run it.
Hi, thank you for the amazing videos... but it keeps disconnecting after a few hours and goes back to square one! How do I keep the connection alive?
I usually play a 10-hour YouTube video in another tab 😅 you gotta keep your computer active
Is there any way to create videos like this on an iPhone?
Will it be on mobile?
Can the generated video be used commercially?
Is this not part of the Stable Diffusion A1111 web UI, like an extension? Is this its own thing? Also, I have 12 GB of VRAM. Does anyone have input on whether similar VRAM worked for them? Thx
this is its own thing
Legends know it's re-uploaded 😅❤
🤣 I confirm
😂😂😂
That's what I'm thinking like how he finished all the edits with one go 😔
Lmfao
But Why??
Loved your video! Super, super helpful. Is there a way or a prompt to achieve better lipsync or mouth movement? I'm struggling with this.
not yet!
Do you have the local tutorial?
Do you need CUDA and Visual Studio installed to run this locally on Win 10?
you can follow the installation guide, the prerequisite tools are listed there
Hi, does this work on a Mac M2 chip?
I'm using the free version of Google Colab and it doesn't let it run, do I need Colab Pro?
Hi, as explained in the video, Colab Pro will give you access to more resources
Does the AI have the capability of animating a drawing that I created (do I need to create the same subject from several angles?) and applying that drawing to a video clip of dancing, walking, or jumping?
you can try image to video, I have a video on that
hello, I followed your video step by step until launching all the scripts, but an error is displayed at optical map settings: NameError: name 'os' is not defined. Can you help me? (I have already tried 3 times but it's still the same, and I used WarpFusion 0.16)
hi, check the pinned comment
Do I still have to pay for another subscription to make WarpFusion work?
Can u model a specific image instead of copying known ones like the Statue of Liberty? I want to make an image of myself dance, for example.
in the case of using your own image, you will probably need to train a model first using your images; there are plenty of tutorials on how to do that on YouTube
After getting an error or a server disconnection, is there a way to continue from the latest frame without running the whole process again?
You can use the resume run feature
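a rough sketch of what that looks like in the run cell (the exact field names may vary between notebook versions, so treat these as assumptions):
resume_run = True
run_to_resume = 'latest'
with that set, the notebook picks up from the last saved frame of the previous run instead of starting over at frame 0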
Are subscription members allowed unlimited generation?
I'm having trouble getting really good consistency, is there a tutorial on the settings to make it perfect?
if you're seeking perfect consistency, we're not there yet! I suggest playing with the settings I covered, try enabling fixed_code, etc...
You're a handsome man!!! I've been really looking forward to this video. And I also have a question: how do you process VR180 3D video this way? After all, we cannot consistently get the same result for both lenses (left and right).
Please let me know if you have a guide for such a solution with style generation in VR180 3D video.
Thank you. We will be following your news, with our whole small team.
I'm not so familiar with VR, but you can try using the same seed for both videos, or render both videos side by side in a single file then run it through Warp, if that makes sense
@@MDMZ every time I run it locally I get the VRAM error
And I could not find a way to install xformers for it (everything out there is about Stable Diffusion)
How can I install xformers so that I lower the VRAM usage?
Also, when running the code it shows "no xformers module found", so it must work with xformers, I just don't know what to change to activate it
Please help
Use A1111 and Deforum or Deforumation. You can control camera angles and more.
First time, please help: I got an error at 1.2 PyTorch: 'No such file or directory: 'nvidia-smi''
Followed the entire tutorial with no luck. None of the guides mention switching the notebook's Hardware accelerator setting from None to GPU. I have no idea if I'm supposed to do that, but that's the only way I can get the error to go away and keep the runtime going past 1.2.
However, with this GPU setting, it finishes down to the GUI cell, then disconnects my runtime and won't reconnect. I then switched the notebook setting back to None, and it connected to the runtime, but now I am back at square one with the 1.2 PyTorch nvidia-smi error.
Please help!
hi, check the pinned comment
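some context: that 1.2 PyTorch cell calls the nvidia-smi tool, which only exists when the runtime has an NVIDIA GPU attached, so switching the hardware accelerator to GPU (Runtime > Change runtime type > GPU) is the intended setup, not a workaround. You can confirm the GPU is attached from a new cell:
!nvidia-smi
if that prints a GPU table, cell 1.2 should pass; the later disconnects are a separate problem, often free-tier usage limits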
Can you do a tutorial on Deforum Stable Diffusion for Google Colab? Because my installed version is not working
will look into it
On average, how much does it cost to make a 30-second video? Supposing it's 1080 vertical and you use the online processing option
very difficult to predict
Which one do you prefer? This WarpFusion, or Stable Diffusion with its Auto1111 interface? I tried this with Stable Diffusion, got similar results, and most importantly, it's free.
I find this more consistent, perhaps I need to play around with A1111 a bit more
What exactly do you need to make these kinds of videos for free in Stable Diffusion?
@@BeetjeVreemd did you find out how
@@SultanHz Unfortunately no i didn't :(
@@BeetjeVreemd did you find out by now ?
Can this also work with still images, or is it only video-to-video?
for images I suggest you use Stable Diffusion with A1111, it's free and easier to use
So, do I have to pay on Patreon to access WarpFusion online? I didn't understand how to access it. Can I buy it? I can't run it on my PC, I have a poor 3070.
you don't need your local GPU for this method
which runtime should I use on Colab? T4 or V100?
I recommend u try both, one will cost you more than the other, but u get more speed
Hi, I used this tutorial and I have a question: why is my video at the end only 4 seconds when I uploaded a 16-second video? Did I do something wrong? I'm new to AI :(
probably, check the step at 7:36 and make sure you set the right frame range, [0,0] to process all frames
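for reference, those fields live in the Video Input settings; a minimal sketch (parameter names may differ slightly between versions, values are just examples):
frame_range = [0, 0] (process every extracted frame)
frame_range = [0, 120] (process only frames 0 through 120)
extract_nth_frame = 1 (keep every frame of the source video)
if extract_nth_frame is above 1, the notebook skips source frames, which also shortens the output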
Is A1111 Stable Diffusion capable of this output?
technically yes, but warpfusion is way way easier
How can I pause the process, turn off my laptop, and continue later from the last frame generated?
try using the resume_run feature
I am having issues connecting Google Colab to the local host... I have posted about the issue in the Discord
Is it possible to do this on your cell phone or do you need a computer?
I can't do it because Google Colab disconnects all the time at the 5th or 6th step, so I have to start again. Is there any way to solve that?
try using the latest version of warpfusion
Does anyone know, can this be done using another image as reference instead of a text prompt?
I believe it's possible now with IP-Adapter
is there any free alternative?
Hey! my run crashed at line 4:
controlnet_multimodel = get_value('controlnet_multimodel',guis)
NameError: name 'get_value' is not defined
Could you help?
hi, check the description
Are there any graphics card requirements for this? Can you tell me?
not if you run it online just like in the video; if you run it locally, I recommend a GPU with at least 12GB of VRAM
hi there, will a 4070 Ti with 12GB of VRAM work for a local runtime?
yep should work fine
@@MDMZ do you think the 4070 Ti 12GB is faster than the one in the Colab plan?
@@jaknowsss I'm not sure 😅, anything stopping you from trying it out ?
I suggest you try it locally first since u have 12gb, before paying for colab pro
Hello dear sir, can I do it with a Mac Studio?
Yes, you can! this works on the cloud so your computer's brand/model is irrelevant 😊😉
@@MDMZ Thank you very much, stay healthy🙌
I have an error that says 'os' is not defined, how do I fix it? TIA
I tried to link my video after I uploaded the file but I get "FileNotFoundError: [WinError 2] The system cannot find the file specified: '/FILENMAME'". I linked it just like you did in the video. Any help is appreciated!
can you try the process from scratch? it might be referring to another setup file
@@MDMZ I've uninstalled and reinstalled everything the local guide said to install. It seems it has trouble finding the video? I put everything in the same folder.
Hi! Does this work with the stable_warpfusion_v0_14_14.ipynb version?
it should, you can always move on to the newest version, the settings shouldn't be much different
Not sure why, but when I try to open my 'run.bat' file after running the 'install.bat' file nothing happens. The command window just opens for half a second and then closes again. I've tried multiple times, including running it as administrator, but it just does the same thing. Is the run.bat file meant to behave this way, or is something wrong? :\
weird, try reinstalling
🎉🎉
Will it also work when using a MacBook?
i suggest you try, cause this is the cloud method
1.4 import dependencies, define functions
Runtime error
will this work on a Mac M1?
this is the online method, it should work, I suggest you try it out u have nothing to lose
Anyone know of a free alternative to WarpFusion?
got an error on my first colab run:
RuntimeError: Error(s) in loading state_dict for ControlLDM:
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
is it rejecting my model "sdxlUnstableDiffusers_v8HeavensWrathVAE.safetensors"?
hi, please check the pinned comment
@@MDMZ I was able to get through by only using the SD 1.4 model. Not able to get any SDXL models to work tho. Do you have any tutorial where you are using SDXL models by chance?
so helpful and inspiring; I'll copy your tutorial, hope it works
Have fun!
Hello, can we use a different checkpoint? I tried and the result is horrible
yes you can
there is an error, "NameError: name 'get_value' is not defined". How do I fix this? Please help!
hi, check the pinned comment for technical support
Please bring a mobile option. I don't have a PC and I wanted to do this on my phone 😢
is it not possible to do the same with stable diffusion?
WarpFusion results are much more consistent