Stable Warpfusion Tutorial: Turn Your Video to an AI Animation

  • Published: 2 Jun 2024
  • The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/mdmz06231
    Learn how to use Warpfusion to stylize your videos. Discover key settings and tips for excellent results so you can turn your own videos into AI animations
    Tech support: / discord
    📁Warpfusion Settings:
    bit.ly/42rJLPw
    🔗Links:
    Warpfusion v0.16(FREE & recommended): bit.ly/3pBh5X3
    Warpfusion v0.14: bit.ly/42HozoG
    DreamShaper: civitai.com/models/4384/dream...
    Stable WarpFusion local install guide: • Stable WarpFusion loca...
    Another local install guide: github.com/Sxela/WarpFusion/b...
    Best Custom Stable Diffusion Models stablecog.com/blog/best-custo...
    How to get good prompts: bit.ly/3IEAzjQ
    How to use Luma AI: • Create FPV-Like Videos...
    Disclaimer: Some links in the description are affiliate links. If you make a purchase through them, I may earn a small commission at no extra cost to you.
    ©️ Credits:
    Stock video: www.pexels.com/video/energeti...
    James Gerde: / gerdegotit
    Marc Donahue: / permagrinfilms
    Markus Paolo Pe Benito: / markuspaolo_
    Alex Spirin: / defileroff
    Noah Miller: / noahrobertmiller
    Willis Hsieh: / willis.visual
    Diesellord: / diesel_ai_art
    Stefano Knoll: / steknoll
    Josh Doctors: / fewjative
    patchesflows: / patchesflows
    Yüksel Aykilic: / designyukos
    Oleh Ibrahimov: / drimota.ai
    nointroproductions: / nointroproductions
    Positive Prompts:
    "0": [
    "realistic female beautiful statue of liberty is a rocky statue dancing, manhattan city skyline in the background, the environment is new york city in day time, realism, hyper detailed, cinematic lighting, photograpny, High detail RAW color art, diffused soft lighting, sharp focus, hyperrealism, cinematic lighting, unreal engine, 4k, vibrant colours, dynamic lighting, digital art, winning award masterpiece, fantastically beautiful, illustration, aesthetically, trending on artstation, art by Zdzisaw Beksiski x Jean Michel Basquiat, high quality, 8k, "
    ]
    Negative prompts:
    "0": [
    "smoke, fog, lowres, (bad anatomy:1.2), EasyNegative, multiple views, six fingers, black & white, monochrome, (bad hands:1.2), (text:1.2), error, cropped, worst quality, low quality, normal quality, jpeg artifacts, (signature:1.2), (watermark:1.3), username, blurry, out of focus, amateur drawing, colored, shading, displaced feet, out of frame, massive breasts, large breasts ,((ugly)), nude nsfw"
    ]
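    For reference, these prompts drop straight into the notebook's frame-keyed schedule: the "0" key is the frame number at which a prompt takes effect, and adding more keys switches prompts mid-video. A minimal sketch of that format (variable names per the notebook's conventions as I recall them; the second keyframe is a hypothetical illustration):

      # Prompts are dicts keyed by frame number; each value applies from that
      # frame until the next key, so a single 0 key holds one prompt throughout.
      text_prompts = {
          0: ["realistic female beautiful statue of liberty ... 8k, "],  # full prompt above
          120: ["hypothetical second prompt that takes over at frame 120"],
      }
      negative_prompts = {
          0: ["smoke, fog, lowres, (bad anatomy:1.2), ..."],  # full list above
      }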
    ⏲ Chapters:
    0:00 Introducing Warpfusion
    0:34 How to start with Warpfusion
    1:08 Google colab: local vs online runtime
    2:01 How to transform a video
    2:34 What's an AI model?
    3:06 Settings
    8:35 How to run Warpfusion
    9:23 Animation preview
    9:30 How to change GUI settings
    12:06 How to export the animation
    12:36 Get featured
    12:49 Warpfusion + Luma AI
    Support me on Patreon:
    bit.ly/2MW56A1
    🎵 Where I get my Music:
    bit.ly/3boTeyv
    🎤 My Microphone:
    amzn.to/3kuHeki
    🔈 Join my Discord server:
    bit.ly/3qixniz
    Join me!
    Instagram: / justmdmz
    Tiktok: / justmdmz
    Twitter: / justmdmz
    Facebook: / medmehrez.bss
    Website: medmehrez.com/
    #warpfusion #ai #stablediffusion
    Who am I?
    -----------------------------------------
    My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.

Comments • 414

  • @MDMZ  11 months ago +17

    Update: I recommend using Warpfusion v0.16: bit.ly/3pBh5X3
    Update 03/04: Just re-tested the same exact steps in the tutorial using v0.14 and Dreamshaper 8 model, it works perfectly!
    The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/mdmz06231
    For tech support and other questions: discord.gg/YrpJRgVcax
    Don't forget #mdmz when you post your Warpfusion videos 😉🥳

    • @juanjuanchen6814  8 months ago

      The problem is: if I pay you, can I use it on a free Colab or free Kaggle account? If not, it seems useless.

    • @kelvinpatricio8842  7 months ago

      I'm using v0_16_13 and the script is giving an error on Generate optical flow and consistency maps 🙁

    • @kelvinpatricio8842  7 months ago

      Can someone help me?

    • @KREOGHOSTOFFICIAL  1 month ago

      YOU ARE CONFUSING THE SHIT OUTTA ME BRO

  • @creatorsmafia  10 months ago +1

    I'm definitely going to give it a try and experiment with different settings.

  • @bdwedgeofanimotion4106  11 months ago +2

    amazing and it really does look good

  • @korujaa  10 months ago +2

    Very good, thanks !!!

  • @bdnwfantaziedreams  7 months ago +1

    very nice and I always wondered how it was done, not easy but the output is impressive

    • @MDMZ  7 months ago

      Thank you! Cheers!

  • @baraazidan4946  10 months ago +1

    Wonderful 👍👍

  • @AiLabxArts  11 months ago +1

    That's impressive!!

    • @MDMZ  11 months ago

      🙏

  • @JuanPerez_2023  10 months ago +1

    Amazing !!!!

  • @WajihSouilemm  11 months ago +1

    Cool bro !! 🔥

    • @MDMZ  11 months ago

      🙏

  • @user-uk9qk3sj4b  11 months ago +5

    Please do a tutorial for the cola shorts clip it's so amazing

  • @chocaholic65  6 months ago +1

    This is an awesome tutorial ❤❤❤

    • @MDMZ  6 months ago

      Thank you! Cheers!

  • @owensy365  11 months ago +1

    ty vv much legend❣

  • @Fyhan69  11 months ago +1

    Awesome. Great Tutorial, ❤

    • @MDMZ  11 months ago

      Thank you! Cheers!

  • @saraeljamal5009  11 months ago +16

    Great tutorial! I followed another tutorial to train my own AI model using rendered images of a character and used it; my first try wasn't so successful (not sure if the reason is the video or the model). Any chance you could create a tutorial on creating our own AI models and using them in Warpfusion?

    • @MDMZ  11 months ago +3

      I followed this once before and it worked great!: ruclips.net/video/kCcXrmVk1F0/видео.html

    • @saraeljamal5009  11 months ago +2

      @MDMZ, Thank you for your assistance! I managed to train my AI model and achieved some progress. However, I'm still struggling with maintaining consistency in masking the female's head throughout each frame. Initially, the mask works for a few frames, but then it starts to take on the form of the original face in the video.

    • @saumyajeetbhowmick7803  10 months ago

      which video tutorial did you use

  • @kartunenetwork9232  11 months ago +2

    thanks for the awesome tutorial! Looks amazing, only thing is mine keeps changing the subject's aesthetic looks and especially the face within a couple frames... is there a way to make it keep the same look as the first frame?

    • @MDMZ  10 months ago +2

      you can try to fix that with scheduling (see the sketch after this thread)
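
    "Scheduling" here is the same frame-keyed dict format as the prompts in the description: many numeric settings accept {frame: value} schedules, so you can hold a strong stylization early and then relax it so later frames drift less. A hedged sketch; the parameter names below (style_strength_schedule, flow_blend_schedule) vary between notebook versions and are assumptions:

      # Hypothetical frame-keyed schedules: stylize strongly at the start, then
      # ease off so later frames track the source footage more closely.
      style_strength_schedule = {0: 0.8, 50: 0.5}
      flow_blend_schedule = {0: 0.95}  # higher values favor temporal consistency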

  • @FaithfulWord  9 months ago +2

    How do you keep the initial animation stable like that, so that the face and background aren't constantly changing?

  • @nizamkoc8261  6 months ago

    Best vid. Thanks

    • @MDMZ  6 months ago

      Glad you liked it!

  • @consciousHarmony  10 months ago

    In the "define SD + K functions, load model" section should I select CPU or GPU for the 'load_to' variable?

  • @borcan7287  11 months ago +3

    Which is better, Warpfusion v0.14 or Stable WarpFusion v0.5.12 ?

  • @braedongarner  11 months ago +53

    Took about 4 hours to render 4 seconds but man it looks buttery smooth. My 1080ti was really trying🤣

    • @MDMZ  11 months ago +4

      glad it worked for you 😁

    • @AhvaBidu  11 months ago +3

      970 here. I envy you! AhaHaHa

    • @Twigslap  11 months ago +5

      About to try this today wish me luck lol

    • @Tamannasehgal19  11 months ago +3

      I've got a GTX 1650, would it be okay?

    • @AhvaBidu  11 months ago +3

      ​@@Tamannasehgal19 Yes. Better than a 970. But will take time. Oh, I think it's ok. I don't really know. Your card is better than mine, so...
      I will just shut up now.

  • @XViewer  11 months ago +1

    Nice

  • @BigBrisian  11 months ago +6

    Hi MDMZ, my run stopped at 'Video Masking' with the error 'NameError: name 'os' is not defined'. Would be amazing if you can help, thank you. (A likely cause is sketched after this thread.)

    • @AnnaBednarek  8 months ago +2

      Same here. Can somebody help us, please? :(
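
    A note on this error (and the similar 'get_value' and 'generate_file_hash' NameErrors reported further down): in Colab it usually means a cell that defines the name was skipped or crashed earlier, so the fix is procedural rather than a code change. A sketch, assuming the standard notebook layout:

      # Re-run the notebook top to bottom (Runtime > Run all) so the cell that
      # defines `os`, `get_value`, etc. executes before the cell that uses it.
      # For the `os` case specifically, importing it yourself is a stopgap:
      import os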

  • @ToMgRoEbE  10 months ago

    If I have an AMD GPU, is it still fine to use the online version only? Is that the same as not having strong enough hardware?

  • @staffan_ofwerman  7 months ago

    I tried to follow your instructions here with my own video clip, but I seem to get errors all the time. Maybe it's because there are new versions up and running now that behave differently. What I'm looking for is to use the video clip I have (it's me in front of a green screen). I would like to change myself into something fun, like some kind of animation, but not all different, just making me look animated, and still have the green screen in the background in the final output. Maybe it's not possible in WarpFusion, or what do you think? Should I look at something else, or is it possible to make this with the right prompt and the right model? I just can't find any tutorials about it. And I thought your video was great.

    • @MDMZ  7 months ago

      it is possible, I have instructions on how to keep the background untouched in this same tutorial, shooting on a green screen will definitely help with the separation. and YES, you should look into using a newer version

  • @rafaeladvincula4564  11 months ago

    Would you recommend using this on a horizontal 1080p video? I have an NVIDIA 3070.

    • @MDMZ  11 months ago

      both will work fine, depends on how you plan to use the output; if it's for IG/TikTok, just go with vertical

  • @MarylandDevin  10 months ago +1

    How does this compare to using stable diffusion image to image batching for creating a stylized look for videos?

    • @MDMZ  10 months ago

      this is much more consistent

  • @minigabiworld  11 months ago +5

    Thank you so much! Great video! Does this also work for cartoon characters with different human proportions?

    • @CYBERNORM  11 months ago

      Aah, sorry, I think we r out of cartoon characters.

  • @clash9927  7 months ago

    where can I find the stable_warpfusion_settings_sample document for the default_settings_path?

  • @AhvaBidu  11 months ago +2

    You are a monster, man! And I own a GTX 970 😂 so some other tutorials are more "for me"

    • @MDMZ  11 months ago +1

      Enjoy!

  • @dr.greenvil7679  10 months ago +1

    Hey!
    I'm considering buying a new PC with 8GB of VRAM. Since Warpfusion seems to require more than that (which means I'd have to pay for Colab Pro anyway), is there any benefit to buying a better 8GB VRAM PC, or should I just stick with my laptop? Thanks for the tutorial.

    • @MDMZ  10 months ago

      depends on what you intend to use it for, 8GB is a bit low for SD

  • @LucidFirAI  10 months ago

    Can I use my own GPU or do I need to pay for Google Colab?
    Can you achieve the same results with Temporal Kit?

  • @Heartog.Design  8 months ago +1

    I'm 2 minutes in and I'm like 🤯 ... so many steps and it feels so complicated

    • @MDMZ  8 months ago

      it only takes a bit of patience, you can do it!

  • @DearVMON  11 months ago +1

    Awesome tutorial!! Quick question: I do have a Windows PC, but I was wondering, will this work on a MacBook as well?

    • @5XM-Film  11 months ago +1

      Obviously not for Mac.
      Also, I would prefer if he mentioned this right at the beginning 🤷🏻‍♂️

    • @MDMZ  11 months ago +1

      It actually works on the cloud! So your OS doesn't matter

    • @MDMZ  11 months ago

      I think you are referring to the local method, this is the online one 😉

    • @DearVMON  11 months ago +1

      @@MDMZ hey, that's what I wanted to understand, so I know which PC I can work on; if it just needs the Colab part and the local install doesn't matter, that's a relief haha, thank you for the info ^^

    • @5XM-Film  11 months ago

      Can anybody help with how to get this done on a Mac?

  • @hurgerburger.  9 months ago

    Do you need the later versions of warpfusion or can you use the earlier ones?

    • @MDMZ  9 months ago

      It's best to use the latest

  • @CONCEPTSJRS  11 months ago

    Question: will this tutorial basically work if I run it locally? I'm not familiar with Colab Pro, but I have a 4080.

    • @MDMZ  11 months ago +1

      yes same process right after you connect to local run

  • @TheDroneExperiment  8 months ago

    Quick Question. If I want to try to keep the original background which options do I select?

    • @MDMZ  7 months ago

      I actually explain that in the video

  • @raunaksharma3604  8 months ago

    @MDMZ, while processing the Video Input settings, I got the following error:
    NameError: name 'generate_file_hash' is not defined
    Please guide

  • @koa8299  5 months ago

    This is probably the most complicated AI program I've used by far. There are so many errors you can't find a fix for online, and confusing settings you have to learn on your own, because nobody has a full settings explanation for it. It took me almost 300 renders to understand what most settings do, but I feel like it's all going to be worth it once I get it all down.

    • @MDMZ  5 months ago +1

      it's definitely challenging and can be frustrating at times, keep an eye on updates, newer notebooks are much more stable

    • @koa8299  5 months ago

      @@MDMZ lol, turns out all I needed to do was tweak the ControlNet settings to get the output I desire. I had no clue consistency and ControlNet correlated with each other

  • @Voidedsomeone  11 months ago

    What's the song that people use for Stable Diffusion?

  • @triangulummapping4516  10 months ago

    How do you increase the trails effect?

  • @vyasbrothers  4 months ago

    Hi, super video. However, I have been trying for 2 days; it disconnects at 20%. Is there any fix for that? Thank you in advance :)

  • @zuzana7366  8 months ago

    Hey, how do I diffuse only the background but keep the subject original? What's the setting for this masking? Thanks

    • @MDMZ  7 months ago

      I have covered that in the video

  • @sonnyalexis2204  10 months ago

    Can we use it for photos?

  • @MikeBishoptv  11 months ago +1

    When I hit "Run all" it can't get past the "1.4 Install and import dependencies" section; it says it's missing some modules (timm, lpips). I've been scouring Discord and see others with this problem but no solutions. I'm using Colab Pro remotely on a Mac. (A manual fix is sketched after this thread.)

    • @MDMZ  11 months ago

      did you try re-running? or using a different version ?

    • @MikeBishoptv  11 months ago

      @@MDMZ yeah I fixed it by downloading the latest version and not the one in your tutorial

    • @MDMZ  11 months ago

      @@MikeBishoptv cool !
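
    If updating alone doesn't clear the missing-module error, a plausible stopgap is installing the two reported packages by hand and then re-running the "Install and import dependencies" cell (both packages exist on PyPI; whether the notebook then proceeds is an assumption):

      # Colab cell: install the modules the dependencies step reported missing.
      !pip install timm lpips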

  • @jaknowsss  11 months ago

    Why does my Colab keep reconnecting? When I reconnect, all my settings go back to defaults and I can't get back to the first ones I made.

  • @radstartrek  11 months ago +1

    bro, if you don't mind telling us, how many compute units did you use per video on average? especially that video you just showed?

    • @reubzdubz  11 months ago +2

      I burnt like 20 units just for a 13s vid lol

    • @radstartrek  11 months ago

      @@reubzdubz wow man! thats some expensive job :D

    • @reubzdubz  11 months ago +1

      @@radstartrek that is if you follow the resolution in the video tho. I went down to 540x960 afterwards.

    • @radstartrek  11 months ago

      @@reubzdubz ok, so it would cost even more compute units on something like 720p.

    • @MDMZ  11 months ago

      honestly, I never documented it, as I was experimenting regularly with different resolutions and settings, which affects the rendering time heavily; but yes, the lower the resolution, the faster it runs

  • @theartforeststudio8667  11 months ago +2

    Is there a way I could use Warpfusion locally with Automatic1111?
    Please make a tutorial on it 🙏

    • @MDMZ  11 months ago

      you can use stable diffusion locally both with A1111 and warpfusion as well, I do have a stable diffusion tutorial on how to install it with A1111

    • @theartforeststudio8667  11 months ago

      @@MDMZ thank you!!! You mean a tutorial on using Warpfusion with Automatic1111, not Google Colab, right?

    • @MDMZ  11 months ago

      @theartforeststudio8667
      Pretty much the same thing, just different platforms.
      Warpfusion on Google Colab is used to run Stable Diffusion.
      A1111 is used to run Stable Diffusion in your browser.
      Both are set up and work differently, so it depends on which one you're more comfortable with

  • @Raharajabimindset-vg3rz  3 months ago

    Thanks, it was really useful. When I save my video and run the last cell, it takes almost 1 hour to complete, though the video that I diffused (the output video) is almost 1 second. I don't really know what is wrong.

  • @dlysid  10 months ago

    Does anyone know how much time it takes to make a 30-second video with Warpfusion? I need to know this in order to present it at a live activation! Many thanks in advance!

    • @MDMZ  10 months ago

      no one will be able to give you the correct answer, it depends on so many factors and it's pretty much impossible to predict until you run it.

  • @vyasbrothers  4 months ago

    Hi, thank you for the amazing videos... but it keeps disconnecting after a few hours and goes back to square one! How do I keep the connection alive?

    • @MDMZ  3 months ago

      I usually play a 10 hour youtube video on another tab 😅 you gotta keep your computer active

  • @user-tx8gq5zf1l  8 months ago

    Is there any way to create videos like this on an iPhone?

  • @user-ll3pp6lh6b  11 months ago

    Will it be on mobile?

  • @user-rq8km3us3r  10 months ago

    Can the generated video be used commercially?

  • @MarylandDevin  10 months ago +1

    Is this not part of the Stable Diffusion A1111 web UI, like an extension? It's its own thing? Also, I have 12 GB of VRAM. Does anyone have any input on whether similar VRAM worked for them? Thx

    • @MDMZ  10 months ago

      this is its own thing

  • @sandeshgtm89  11 months ago +32

    Legends know it's re-uploaded 😅❤

    • @MDMZ  11 months ago +2

      🤣 I confirm

    • @fashionrisk7675  11 months ago

      😂😂😂

    • @777sumitx  11 months ago

      That's what I'm thinking, like how did he finish all the edits in one go 😔

    • @Evilra  11 months ago

      Lmfao

    • @Mayank-lf2ym  11 months ago

      But Why??

  • @cocoysalinas1  10 months ago

    Loved your video! Super, super helpful. Is there a way or a prompt to achieve better lip sync or mouth movement? I'm struggling with this.

    • @MDMZ  10 months ago

      not yet!

  • @TodosSomosTraders  9 months ago

    Do you have the local tutorial?

  • @jessecallahan480  10 months ago

    Do you need CUDA and Visual Studio installed to run this locally on Win 10?

    • @MDMZ  10 months ago

      you can follow the installation guide, the pre-required tools are listed there

  • @vivienatan8039  9 months ago

    Hi, does this work on the Mac M2 chip?

  • @SA-Brawl  11 months ago +1

    I'm using the free version of Google Colab, so it doesn't let it run; do I need Colab Pro?

    • @MDMZ  11 months ago

      Hi, as explained in the video, colab pro will give you access to more resources

  • @jasontreyes8078  2 months ago

    Does the AI have the capability of animating a drawing that I created (do I need to create the same subject from several angles?), and applying that drawing to a video: a dance, walk, or jumping video clip?

    • @MDMZ  2 months ago

      you can try image to video, I have a video on that

  • @Deviiiiiilllll  10 months ago +1

    Hello, I followed your video step by step until the launch of all the scripts, but an error is displayed at the optical map settings: NameError: name 'os' is not defined. Can you help me? (I have already tried 3 times but it's still the same, and I took Warpfusion 0.16)

    • @MDMZ  10 months ago

      hi, check the pinned comment

    • @Deviiiiiilllll  9 months ago

      Do I still have to pay another subscription to make Warpfusion work?

  • @jannroche  10 months ago

    Can you model a specific image instead of copying known ones like the Statue of Liberty? I want to make an image of myself dance, for example.

    • @MDMZ  10 months ago

      in the example of using your own image, you will probably need to train a model first using your images, there are plenty of tutorials on how to do that on youtube

  • @triangulummapping4516  9 months ago

    After getting any error or server disconnection, is there a way to continue from the latest frame without running the whole process again?

    • @MDMZ  9 months ago

      You can use the resume run feature (sketched below)
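
    A hedged sketch of that resume feature (setting names per the Disco Diffusion lineage the notebook inherits; verify against your version's GUI):

      # In the run settings: pick up from the latest saved frame of a previous
      # run instead of starting from frame 0.
      resume_run = True
      run_to_resume = 'latest'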

  • @user-rq8km3us3r  10 months ago

    Are subscription members allowed unlimited generation?

  • @hinlee1947  11 months ago +2

    I'm having trouble getting really good consistency; is there a tutorial about the settings to make it perfect?

    • @MDMZ  11 months ago +2

      if you're seeking perfect consistency, we're not there yet! I suggest playing with the settings I covered, try enabling fixed_code, etc...

  • @VRMOTION  11 months ago +1

    You're a handsome man!!! I've been really looking forward to this video. And there is also a question: how do you process VR180 3D video this way? After all, we cannot consistently get the same result for both lenses (left and right).
    Please let me know if you have a guide for such a solution with style generation in VR180 3D video.
    Thank you. We will be following your news, with our whole small team.

    • @MDMZ  11 months ago +2

      I'm not so familiar with VR, but you can try using the same seed for both videos, or render both videos side by side in a single file then run it through Warp, if that makes sense (a side-by-side sketch follows after this thread)

    • @FirstLast-tx3yj  10 months ago

      @@MDMZ every time I run it locally I get the VRAM error,
      and I could not find a way to install xformers for it (everything out there is about stable diffusion).
      How can I install xformers so that I lower the VRAM usage?
      Also, when running the code it shows "no xformers module found", so it must work with xformers, I just don't know what to change to activate it.
      Please help

    • @johnnyc.31  9 months ago

      Use A1111 and Deforum or Deforumation. You can control camera angles and more.
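
    For the side-by-side idea above: a quick way to join the left/right-eye renders into one file before warping is ffmpeg's hstack filter (the filenames here are placeholders):

      # Colab cell: stack left.mp4 and right.mp4 horizontally into sbs.mp4.
      !ffmpeg -i left.mp4 -i right.mp4 -filter_complex hstack=inputs=2 sbs.mp4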

  • @MeowVibrations  10 months ago

    First time, please help: I got an error at 1.2 PyTorch - 'No such file or directory: 'nvidia-smi''.
    I followed the entire tutorial with no luck. None of them talk about switching the notebook's hardware accelerator setting from None to GPU. I have no idea if I'm supposed to do that, but that's the only way I can get the error to go away and keep the runtime going past 1.2.
    However, with this GPU setting, it finishes down to the GUI cell, then disconnects my runtime and won't reconnect. I then switched the notebook setting back to None and it connected to the runtime, but now I am back at square one with the 1.2 PyTorch nvidia-smi error.
    Please help!

    • @MDMZ  10 months ago

      hi, check the pinned comment

  • @user-uk9qk3sj4b  11 months ago

    Can you do a tutorial for Deforum Stable Diffusion on Google Colab? Because my installed version is not working.

    • @MDMZ  11 months ago

      will look into it

  • @KayaDeus  10 months ago

    On average, how much does it cost to make a 30-second video? Supposing it's 1080 vertical and you use the online processing option.

    • @MDMZ  10 months ago

      very difficult to predict

  • @Howling_Moon  11 months ago +3

    Which one do you prefer, this Warpfusion or Stable Diffusion with its Auto1111 interface? I tried this with Stable Diffusion, got similar results, and what's most important, it's free.

    • @MDMZ  11 months ago

      I find this more consistent, perhaps I need to play around with A1111 a bit more

    • @BeetjeVreemd  11 months ago

      What exactly do you need to make these kinds of videos for free in Stable Diffusion?

    • @SultanHz  11 months ago +1

      @@BeetjeVreemd did you find out how

    • @BeetjeVreemd  11 months ago

      @@SultanHz Unfortunately no i didn't :(

    • @kubagacek7352  11 months ago

      @@BeetjeVreemd did you find out by now ?

  • @thekarmicbrat  8 months ago

    Can this also work with still images or is it only video to video?

    • @MDMZ  8 months ago

      for images i suggest you use stable diffusion on A1111, it's free and easier to use

  • @essencialreal  10 months ago

    So, do I have to pay on Patreon to access Warpfusion online? I didn't understand how to access it. Can I buy it? I can't run it on my PC, I have a poor 3070.

    • @MDMZ  10 months ago

      you don't need your local GPU for this method

  • @fatjon6117  7 months ago

    Which runtime should I use on Colab, T4 or V100?

    • @MDMZ  7 months ago

      I recommend you try both; one will cost you more than the other, but you get more speed

  • @aidigitalgoddess  8 months ago

    Hi, I used this tutorial and I have a question: why is my video at the end only 4 seconds when I uploaded a 16-second video? Did I do something wrong? I'm new to AI :(

    • @MDMZ  8 months ago

      probably, check the step at 7:36 and make sure you set the right frame range, [0,0] to process all frames (see the sketch after this thread)
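
    For reference, that frame range lives in the video input settings; an end value of 0 means "through the last frame". A sketch (the exact variable name may differ by notebook version and is an assumption):

      # [start_frame, end_frame]; [0, 0] processes every extracted frame.
      frame_range = [0, 0]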

  • @parzimav  11 months ago

    Is A1111 Stable Diffusion capable of this output?

    • @MDMZ  11 months ago +1

      technically yes, but warpfusion is way way easier

  • @triangulummapping4516  10 months ago

    How can I pause the process, turn off my laptop, and continue later from the last frame generated?

    • @MDMZ  10 months ago

      try using the resume_run feature

  • @Privacyking  11 months ago +1

    I am having issues connecting Google Colab to the local host... I have posted in the Discord about the issue

    • @goldalemanha6330  11 months ago

      Is it possible to do this on your cell phone or do you need a computer?

  • @soyeltama  8 months ago

    I can't do it because Google Colab disconnects all the time on the 5th or 6th step, so I have to start again. Is there any way to solve that?

    • @MDMZ  8 months ago

      try using the latest version of warpfusion

  • @dougiejones628  5 months ago

    Does anyone know, can this be done using another image as reference instead of a text prompt?

    • @MDMZ  5 months ago

      I believe it's possible now with IPadapter

  • @riyando  10 months ago +1

    is there any free alternative?

  • @bboysounds  11 months ago

    Hey! my run crashed at line 4:
    controlnet_multimodel = get_value('controlnet_multimodel',guis)
    NameError: name 'get_value' is not defined
    Could you help?

    • @MDMZ  10 months ago

      hi, check the description

  • @user-ft9oz3si2p  1 month ago

    Are there any graphics card requirements for this? Can you tell me?

    • @MDMZ  1 month ago

      not if you run it online just like in the video; if you run it locally, I recommend a GPU with at least 12GB of VRAM

  • @jaknowsss  11 months ago

    Hi there, will a 4070 Ti with 12GB VRAM work for a local runtime?

    • @MDMZ  11 months ago +1

      yep should work fine

    • @jaknowsss  11 months ago

      @@MDMZ do you think 4070ti 12gb is faster than the one with the colab plan?

    • @MDMZ  11 months ago

      @@jaknowsss I'm not sure 😅, anything stopping you from trying it out ?

    • @MDMZ  11 months ago

      I suggest you try it locally first since u have 12gb, before paying for colab pro

  • @bigdaddysho962  11 months ago

    Hello dear sir, can I do it with a Mac Studio?

    • @MDMZ  11 months ago +1

      Yes, you can! this works on the cloud so your computer's brand/model is irrelevant 😊😉

    • @bigdaddysho962  11 months ago

      @@MDMZ Thank you very much, stay healthy🙌

  • @jabeeyow186  9 months ago

    I have an error that says 'os' is not defined; how do I fix it? TIA

  • @stevopatiz  8 months ago +1

    I tried to link my video after I uploaded the file but I get "FileNotFoundError: [WinError 2] The system cannot find the file specified: '/FILENMAME'". I linked it just like you did in the video. Any help is appreciated!

    • @MDMZ  8 months ago +1

      can you try the process from scratch? it might be referring to another setup file

    • @stevopatiz  8 months ago +1

      @@MDMZ I've uninstalled and reinstalled everything the local guide said to install. It seems it has trouble finding the video? I put everything in the same folder.

  • @chrissmarrujo5869  11 months ago

    Hi! Does this work with the stable_warpfusion_v0_14_14.ipynb version?

    • @MDMZ  10 months ago

      it should; you can always move on to the newest version, the settings shouldn't be much different

  • @MREDZ  8 months ago

    Not sure why, but when I try to open my 'run.bat' file after running the 'install.bat' file nothing happens. The command window just opens for half a second and then closes again. I've tried multiple times, including running it as administrator, but it just does the same thing. Is the run.bat file meant to behave this way, or is something wrong? :\

    • @MDMZ  8 months ago

      weird, try reinstalling

  • @ojasvisingh786  11 months ago +2

    🎉🎉

  • @KoyaEry  10 months ago

    Will it also work when using a Macbook?

    • @MDMZ  10 months ago

      I suggest you try, since this is the cloud method

  • @dwmwi7216  10 months ago +1

    1.4 import dependencies, define functions
    Runtime error

  • @myronkoch  11 months ago +1

    Will this work on a Mac M1?

    • @MDMZ  11 months ago +1

      this is the online method, it should work; I suggest you try it out, you have nothing to lose

  • @davidw717  11 months ago

    Anyone know of a free alternative to Warpfusion?

  • @pbb  7 months ago

    got an error on my first colab run:
    RuntimeError: Error(s) in loading state_dict for ControlLDM:
    size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    is it rejecting my model "sdxlUnstableDiffusers_v8HeavensWrathVAE.safetensors"?

    • @MDMZ  7 months ago

      hi, please check the pinned comment

    • @pbb  7 months ago

      @@MDMZ I was able to get through by only using the SD 1.4 model. Not able to get any SDXL models to work tho. Do you have any tutorial where you are using SDXL models by chance?

  • @ariofaisal3279  8 months ago

    So helpful and inspiring; I'll try to copy your tutorial, hope it works.

    • @MDMZ  8 months ago

      Have fun!

  • @elifmiami  29 days ago

    Hello, can we use a different checkpoint? I tried and the result is horrible.

    • @MDMZ  29 days ago

      yes you can

  • @Rajivgupta94  11 months ago

    There is an error: "NameError: name 'get_value' is not defined". How do I fix this? Please help!

    • @MDMZ  11 months ago

      hi, check the pinned comment for technical support

  • @goldalemanha6330  11 months ago +1

    Please bring a mobile option. I don't have a PC and I wanted to do this on my phone 😢

  • @ProtRifprottoyislam  11 months ago

    is it not possible to do the same with stable diffusion?

    • @MDMZ  10 months ago +1

      warpfusion results are much more consistent