Stable Video Diffusion - RELEASED! - Local Install Guide

  • Published: 28 Aug 2024

Comments • 331

  • @OlivioSarikas
    @OlivioSarikas 9 months ago +21

    #### Links from the Video ####
    Download Workflow: drive.google.com/file/d/17UQXmDRvPLLI7c6g76LlZF4BjMuoRHQa/view?usp=sharing
    SVD Model Download: huggingface.co/stabilityai/stable-video-diffusion-img2vid/tree/main
    Enigmatic_e Video: ruclips.net/video/imyQuIiuRnA/видео.html
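
    For anyone asking below where the downloaded model goes: a minimal sketch, assuming a default ComfyUI checkout at ~/ComfyUI (adjust the path to your install; the wget URL follows Hugging Face's standard "resolve" layout):

```shell
# Assumption: ComfyUI cloned to ~/ComfyUI. SVD checkpoints go in
# models/checkpoints, next to your other Stable Diffusion models.
COMFY="$HOME/ComfyUI"
mkdir -p "$COMFY/models/checkpoints"
# Then place svd.safetensors (from the Hugging Face link above) there, e.g.:
# wget -P "$COMFY/models/checkpoints" \
#   "https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/resolve/main/svd.safetensors"
echo "Put svd.safetensors in: $COMFY/models/checkpoints"
```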

    • @LouisGedo
      @LouisGedo 9 months ago +1

      👋

    • @DerXavia
      @DerXavia 9 months ago +40

      hey, you forgot the link to the ComfyUI Manager extension :)

    • @hackernoonk4645
      @hackernoonk4645 9 months ago

      that is so cool

    • @huymaivan8671
      @huymaivan8671 8 months ago

      Is it possible to install this locally on a PC with Windows 7???

    • @anonymous14
      @anonymous14 8 months ago +2

      Please add to this comment that you have to click on RIFE VFI ckpt_name and then select either rife47.pth or rife49.pth.
      There are several comments about people who got an error. Thank you.

  • @ChronicKPOP
    @ChronicKPOP 8 months ago +6

    2023 and installs are still crazy. One day, we'll be so advanced, we can click "install" and things will get installed simply... one day.

  • @aivideos322
    @aivideos322 9 months ago +27

    This is a good step; as soon as they allow LCM and text input to be added, it will be game-changing. The stability of the backgrounds is outstanding compared to AnimateDiff, but the motion of people is severely lacking, and there's no LoRA support. I have high hopes for this to improve fast, however.

  • @BarbaraBasso
    @BarbaraBasso 9 months ago +8

    Thank you, Olivio! I was starting to doubt my computer's capabilities after my tests failed yesterday. However, after following this tutorial, everything clicked into place. It turns out that the crucial missing piece was the 'update all' step in my tests. Lesson learned!

  • @matthallett4126
    @matthallett4126 9 months ago +3

    I'm in the process of making a WWII movie trailer with SVD. So fun.

  • @Arshoon
    @Arshoon 9 months ago +10

    The age of never-ending AI shows we can binge is coming.

    • @OlivioSarikas
      @OlivioSarikas 9 months ago +1

      Doom Scrollers gonna be doomed 🤣

  • @Gh0sty.14
    @Gh0sty.14 9 months ago +8

    I'm finding it actually works quite well at different resolutions. I've done 512x768, 768x768, etc. and they turned out good.

    • @Steamrick
      @Steamrick 9 months ago

      can it do 1280x720?

    • @VISAEON
      @VISAEON 9 months ago +1

      @@Steamrick I'm running it at 2416x1360

    • @OlivioSarikas
      @OlivioSarikas 9 months ago +3

      yes, but 576x1024 is the suggested res, because that is what it was trained on. :)

    • @Gh0sty.14
      @Gh0sty.14 9 months ago +1

      @@OlivioSarikas Yeah definitely. I ended up trying it because at native resolution I couldn't generate 25 frames without running out of vram lol

    • @SmartMoneyRyan
      @SmartMoneyRyan 7 months ago

      @@Gh0sty.14 how much vram?

  • @Merlinvn82
    @Merlinvn82 9 months ago +10

    RTX 3060 with the 14-frame model took about 2 mins per render ❤❤❤

    • @mirek190
      @mirek190 9 months ago +1

      with a 3090 it's a few seconds ;) ... around 10 seconds

    • @PokerGuts
      @PokerGuts 6 months ago

      4070 Ti Super, around 1 minute for the 25-frame model with a multiplier of 4

    • @MuratAtasoy
      @MuratAtasoy 5 months ago

      RTX 3060 Ti, it takes decades :/ Because my image is 1024x1024??

    • @Merlinvn82
      @Merlinvn82 5 months ago

      @@MuratAtasoy check if your GPU is running while generating video... maybe 8GB VRAM is not sufficient...

  • @Macatho
    @Macatho 9 months ago +2

    Every time you pronounce the word "copy" incorrectly, it's exactly like my boss does 😅 love it hah

  • @sirdrak
    @sirdrak 9 months ago +34

    An important note: in the video you show svd and svd_image_decoder as the 14 fps and 25 fps models, but that's incorrect... The model for 25 fps video is svd_xt, located at a different Hugging Face link...

    • @alexdesouza8696
      @alexdesouza8696 9 months ago

      so what's the difference then?

    • @JimboSkinner
      @JimboSkinner 9 months ago

      Oh my gosh you’re right. I just grabbed the model from the post on the Hugging Face homepage and it was the xt version

    • @PyruxNetworks
      @PyruxNetworks 9 months ago +3

      in the model info it's mentioned as "we also finetune the widely used f8-decoder for temporal consistency. For convenience, we additionally provide the model with the standard frame-wise decoder"

    • @KINGLIFERISM
      @KINGLIFERISM 9 months ago +2

      Not all heroes wear capes...

    • @Finhornify
      @Finhornify 9 months ago +2

      Thanks for noticing and saying something.

  • @MarcGough
    @MarcGough 7 months ago

    Olivio your attention to detail in sharing these workflows is phenomenal - thank you

  • @johnyau4290
    @johnyau4290 8 months ago +9

    Hello, thank you for the tutorial! I'm wondering how I can get the RIFE VFI and Video Combine nodes?

  • @vatoko
    @vatoko 7 months ago

    Hooray! It worked! I figured it out in just 2 days )) Thank you very much!

  • @flavorbot
    @flavorbot 4 months ago

    thanks for the quick and easy breakdown, have been putting off learning Comfy but this helps a lot

  • @Make_a_Splash
    @Make_a_Splash 9 months ago +4

    Thank you Olivio. I'm loving ComfyUI, especially because it is old-PC friendly. I'll give it a go.

    • @OlivioSarikas
      @OlivioSarikas 9 months ago +3

      comfyUI is like Lego for AI. Makes me feel like a kid in a candy store :)

  • @celso2951
    @celso2951 9 months ago +148

    I hate ComfyUI!!!! Really hate it with my heart!

    • @USBEN.
      @USBEN. 9 months ago +23

      You're not alone.

    • @fietsindeschie
      @fietsindeschie 9 months ago +41

      skill issue

    • @audiogus2651
      @audiogus2651 9 months ago +11

      It's great for obsessing over precision pores and moisture on eyelashes, but for rapid ideation it stinks.

    • @vegacosphoto
      @vegacosphoto 9 months ago +21

      I hear you bro. I'll wait for an Automatic 1111 or Gradio version

    • @Im_that_guy_man
      @Im_that_guy_man 9 months ago +10

      Hate it too!

  • @tungstentaco495
    @tungstentaco495 9 months ago +12

    excited for this, but I hate node based apps. Hopefully there will be an Auto1111 version soon.

    • @fenriswolf-always-forward
      @fenriswolf-always-forward 9 months ago +1

      Applying emotions to an interface... Reminds me of the days when people fought over Playstations, Xbox and Nintendos. Facepalm.

    • @chelfyn
      @chelfyn 9 months ago +1

      You don't have to make the node networks yourself, just grab ones that do what you want and get creative. Plenty out there.

    • @tungstentaco495
      @tungstentaco495 9 months ago

      @@fenriswolf-always-forward your comment suggests there's no such thing as a bad interface. lol

  • @PCproffesorx
    @PCproffesorx 9 months ago +6

    I still haven't made the switch to Comfy. I have a fairly complex workflow in A1111 that I'm sure I could automate in Comfy if I spent enough time, but I like all the tinkering and inpainting and iterative steps that I use in A1111. In Comfy I feel like you set up a workflow and it's all done in one shot. I guess I could create a workflow for each step and have it work somewhat the same, though.

    • @stefankrstic2529
      @stefankrstic2529 6 months ago

      I automated my A1111 workflow with Node.js scripts using the A1111 API, but when I look at Comfy it looks like you can do all of that in the browser without the need for coding. Plus everything new always works in Comfy first xD So I decided to switch today. Time to say bye bye to A1111.

  • @4.0.4
    @4.0.4 9 months ago +2

    Gonna wait for the A1111 version

  • @intangur
    @intangur 9 months ago +3

    Thanks for the guide! Haven't used deforum much lately, and runwayml tends to get pricey. This will be fun to play around with.

    • @OlivioSarikas
      @OlivioSarikas 9 months ago +1

      you are welcome. i love these new models :)

  • @SamBeera
    @SamBeera 9 months ago +2

    Hi Olivio, thank you so much for this video; it's a clear step-by-step that helps someone follow along and implement it. I have an older GPU - an NVIDIA GeForce GTX 1080 Ti - and it takes quite a long time to render the video. How much time does it take on your system?

  • @letech5144
    @letech5144 7 months ago

    Thank you for the detailed instructions! I was able to create a 512x512 14-frame rocket video finally (Prompt executed in 599.47 seconds) with my RX 580 4GB video card.

  • @niztheshpiz
    @niztheshpiz 9 months ago +3

    Is it possible to add this extension to Stable Diffusion automatic1111 workspace?

  • @JL-sy2me
    @JL-sy2me 9 months ago +1

    Finally! That looks so amazing.

  • @chardellbrown1821
    @chardellbrown1821 9 months ago

    Ha, I was waiting for your take on the install my friend

  • @darrenvarley105
    @darrenvarley105 9 months ago +1

    Kinda cool. Can see it being useful for making mini-clips for presentations or youtube vids (more interesting than putting still images, I guess).

  • @Grognakwf
    @Grognakwf 9 months ago +3

    With the 24-frame model I got around 6 s/it, with the 14-frame model around 3 s/it.
    8GB VRAM, RTX 3070

  • @GibGiab-gc7qm
    @GibGiab-gc7qm 9 months ago +1

    Thanks man!
    I'll come back to check the comments for the hardware needed.. no info on the net..

  • @Gromst3rr
    @Gromst3rr 5 months ago

    Thank you very much, Olivio!

  • @openroomxyz
    @openroomxyz 9 months ago +6

    You didn't mention how much VRAM it requires to work?

    • @Eleganttf2
      @Eleganttf2 9 months ago

      12GB

    • @neofuturoai
      @neofuturoai 9 months ago +1

      minimum 8GB, 6 if you do a smaller image size

    • @mirek190
      @mirek190 9 months ago

      for me the 25fps model takes 9 GB @@Eleganttf2

  • @greysonwagner
    @greysonwagner 9 months ago +11

    I know you say "ComfyUI" is the "future" of AI, but I can't bring myself to use it. The learning curve is much steeper than something like A1111. Yes you have massive control and I'm not unfamiliar with Node workflows (Unreal, etc) but in terms of mass marketability and ease of use for the average consumer, it's not the way to go in my opinion.

    • @Vestu
      @Vestu 9 months ago +3

      Or it should have some easy-mode UI with the most common building blocks, and you could enter the current "advanced" mode when needed

    • @Drew_pew_pew_pew
      @Drew_pew_pew_pew 9 months ago +1

      @@Vestu StableSwarmUI might be something for you (even if it's still alpha). I learned to love ComfyUI because it's the only thing running smoothly on my potato PC.

    • @franciscodurand5209
      @franciscodurand5209 9 months ago +1

      I was thinking the same as you, but believe me, after you first try it, you won't ever use Automatic1111 again. It is fun to watch the nodes "working" while rendering, and it's very intuitive - even if it doesn't look like it at first glance @@Vestu

    • @Paperclown
      @Paperclown 9 months ago +2

      @@franciscodurand5209 it's as fun as glaring at a cluttered house that is long overdue for tidying.

    • @baza0
      @baza0 9 months ago +1

      @@TPCDAZ With an 8GB GPU, SDXL doesn't work well in Automatic1111. I've disliked nodes for 20+ years, but I just use somebody else's workflow file for Comfy.
      I never need to change the connection wires or make new nodes. I've been using Fooocus a lot recently, though.
      So I think Auto1111 is for elites now (unless using an SD 1.5 model).

  • @grzesiektg
    @grzesiektg 9 months ago

    oh dang, will have to try it now! thanks for the sources! :)

  • @lcmiracle
    @lcmiracle 8 months ago

    I'm eagerly looking forward to being able to generate my own Warhammer Fantasy Battle movie with nothing but a script I have generated using any random uncensored text-to-text model in a year's time

  • @KraftyMarketing
    @KraftyMarketing 4 months ago

    Thank you very much, awesome tutorial :)

  • @gizmomismo7071
    @gizmomismo7071 9 months ago +6

    Does anybody know how much VRAM you need to run this?

    • @MyWhyAI
      @MyWhyAI 9 months ago

      You can run it on 6GB of VRAM

    • @mirek190
      @mirek190 9 months ago

      24 fps model takes 9 GB for me @@MyWhyAI

  • @Macieks300
    @Macieks300 9 months ago +5

    I thought you needed 40 GB of VRAM for SD Video to work? How much VRAM do you have, Olivio?

    • @mirek190
      @mirek190 9 months ago

      For me it takes 9 GB of 24 GB - 24 fps model (RTX 3090)

    • @Roninworld
      @Roninworld 8 months ago

      3050 8 GB VRAM, 16 GB RAM, R7 1800X, and it takes me 3 min to create a vid @@mirek190

  • @Byrdfl3wsNest
    @Byrdfl3wsNest 9 months ago +3

    Amazing work as always Olivio! - If anyone here is getting an "Error occurred when executing KSampler: unsupported operand type(s) for *=: 'int' and 'NoneType'" and has a solution please let me know. Thanks!

    • @GBUK666
      @GBUK666 9 months ago +3

      If you disable FreeU Advanced, the problem goes away.

    • @Byrdfl3wsNest
      @Byrdfl3wsNest 9 months ago

      It works! Thank you @@GBUK666! You legend!!!

    • @omarei
      @omarei 9 months ago

      Do you mean to delete the FreeU node? I did that and still get the same error. Odd @@GBUK666

  • @loszhor
    @loszhor 9 months ago +1

    Thank you for the information.

  • @amj2048
    @amj2048 9 months ago

    can't wait to get some free time to check this out

  • @artlab6000
    @artlab6000 6 months ago +1

    Did not see the link to the ComfyUI Manager extension. :) thanks for the vid

  • @huyked
    @huyked 9 months ago +2

    What are the minimum requirements in regard to VRAM, RAM, graphics card, etc.?

    • @baza0
      @baza0 9 months ago +1

      Just did the beach vid on an RTX 2070 8GB, took 20 min. My card is underclocked 10%. Used the 25fps model listed in the Enigmatic_e video.

  • @wowd-rt7rc
    @wowd-rt7rc 9 months ago +2

    Please help, I've gotten these errors before and gave up in the past with other workflows. I've attempted it with my main ComfyUI install and with a fresh one.
    I've run the manager and installed all missing nodes, did an update check, and restarted multiple times.
    When loading the graph, the following node types were not found:
    ImageOnlyCheckpointLoader
    SVD_img2vid_Conditioning
    FreeU_V2
    VideoLinearCFGGuidance
    Nodes that have failed to load will show as red on the graph.

    • @michaelharper9174
      @michaelharper9174 9 months ago +1

      I'm having the same problem! When I check for missing nodes, nothing shows up :(

  • @hashir
    @hashir 9 months ago +1

    Unfortunately doesn’t work on my M2 Mac MBP. Getting “RuntimeError: Conv3D is not supported on MPS”

  • @brendensingh3676
    @brendensingh3676 7 months ago

    You jumped from the SVD download to having it open, with no information on pathing or troubleshooting. Totally hopeful.

  • @ogrekogrek
    @ogrekogrek 7 months ago

    thank you Olivio

  • @jtreedy116
    @jtreedy116 9 months ago +2

    Anyone know if there is a way to choose which direction the resulting video pans left vs right?

  • @Tom_Neverwinter
    @Tom_Neverwinter 9 months ago +1

    but what did ComfyUI do that the others didn't here...

  • @tripleheadedmonkey6613
    @tripleheadedmonkey6613 9 months ago +4

    Awesome video! As someone else stated below, the SVD-XT model is for the larger FPS count though. Otherwise, perfect video :D

  • @eliluong
    @eliluong 9 months ago

    thanks for the guide!

  • @touaxiong3452
    @touaxiong3452 9 months ago

    thank you for the tutorial, it helps a lot

  • @kendollridenour4853
    @kendollridenour4853 9 months ago

    (Anyone else having problems when they try "Update All" in the ComfyUI Manager? After I update, it continuously says "restarting".) I just got it working - great video. I do wish you explained some parts better.

  • @ArtificialBeauties
    @ArtificialBeauties 9 months ago

    Thanks for sharing 💗

  • @PokerGuts
    @PokerGuts 6 months ago

    Thanks for the helpful video. Do you think there is an enhance-detail node I can apply at some point in the workflow that will eliminate the blurriness and disfigurement that happens at times?

  • @thanksfernuthin
    @thanksfernuthin 9 months ago +3

    Thanks for being so clear. You're all about ComfyUI. I don't need to subscribe to you anymore.

  • @lmbits1047
    @lmbits1047 8 months ago

    Looks like those glasses were added with stable video diffusion. Easy to tell.

  • @FF_LL_XX
    @FF_LL_XX 7 months ago

    Thanks for the nice tutorial - I have one question: in which node can I change the length of the final video? Best, Felix

  • @jaysonwall434
    @jaysonwall434 8 months ago

    Awesome! So helpful! :) :) :)

  • @CineSolutions
    @CineSolutions 1 month ago

    I love ComfyUI.

  •  9 months ago

    Cool, thank you!

  • @AutumnRed
    @AutumnRed 9 months ago +1

    I've tried this, but for most pictures all I get is a simple animation that could be done much faster and more easily with any mobile-phone video-editing app. I hope this AI improves soon, though.

  • @vladimirrog
    @vladimirrog 7 months ago

    Thank you

  • @dogewoof6286
    @dogewoof6286 8 months ago

    thanks Tyson fury

  • @Stick3x
    @Stick3x 9 months ago

    ComfyUI is awesome. I'm used to nodes in DaVinci Resolve, so it's perfect.

  • @metatron3942
    @metatron3942 9 months ago +1

    Image complexity doesn't seem to matter as long as you have a shallow depth of field - a distinguishing difference between the foreground person or object and the background, such as a clear border.

    • @OlivioSarikas
      @OlivioSarikas 9 months ago +1

      true, but what i meant to say is that it won't do a complex dance move or anything like what you see in the deforum and kaiber videos

  • @sb6934
    @sb6934 9 months ago

    Thanks!

  • @cosmiccarp5030
    @cosmiccarp5030 9 months ago

    Thanks Olivio, and don't worry, many of us can handle both A1111 and ComfyUI. Keep 'em coming. 24GB Gang Gang-

  • @beatle72298
    @beatle72298 7 months ago

    Has anyone else noticed that when using this default workflow that the output video appears "smooth" or smeared in comparison to the input image? This even happens when starting with a lower resolution image.

  • @nauseouscustody1440
    @nauseouscustody1440 9 months ago

    👍Cool as ever

  • @lKaos66
    @lKaos66 2 months ago

    Thanks a lot man. I have a problem: when installing all the missing nodes, all of them install correctly except one: VHS VideoCombine. I get the same error again and again:
    "When loading the graph, the following node types were not found:
    VHS_VideoCombine
    Nodes that have failed to load will show as red on the graph."
    I tried to update it, fix it, and even delete and reinstall it, but nothing seems to work for me.
    Do you know what it could be? Thanks!!

  • @CaptainKokomoGaming
    @CaptainKokomoGaming 9 months ago +1

    my favourite way is not with comfy.....

  • @AephVeyniker
    @AephVeyniker 5 months ago

    Everywhere I go, it seems that SD checkpoints, LoRAs, and ControlNets aren't named in any useful or logical way.

  • @FutonGama
    @FutonGama 9 months ago

    Very nice

  • @-E42-
    @-E42- 9 months ago

    thanks man... !

  • @CyberPhonkMusic
    @CyberPhonkMusic 5 months ago

    Is there a way to extend a video after it has been created?

  • @c0d3_m0nk3y
    @c0d3_m0nk3y 9 months ago +2

    Do you only provide 1 picture and it generates 14 frames or do you provide 14 frames and it will interpolate between them?

    • @MegaGasek
      @MegaGasek 9 months ago +1

      It will generate the frames from one picture. I think that if you couple it with Topaz Video AI/Handbrake you'll be able to do amazing stuff.

    • @c0d3_m0nk3y
      @c0d3_m0nk3y 9 months ago +1

      @@MegaGasek Thanks for the response. So you basically have no control. Things will move the way the AI thinks they should. Too bad, I was hoping for a way to generate consistent views of an object/subject from different angles for photogrammetry. It would have been great if you could have fed it 14 viewpoints.

    • @MegaGasek
      @MegaGasek 9 months ago +2

      @@c0d3_m0nk3y Yes, I know. However, I think this is just the very beginning of great things to come. The way "AI" is evolving, I think you'll have your wish in no time at all. I do think that processing power will be an issue for more elaborate things.

  • @nomanqureshi1357
    @nomanqureshi1357 8 months ago

    it's like installing a nuke, boom

  • @pogiman
    @pogiman 9 months ago +1

    what version of ComfyUI are you using? mine doesn't have that fancy share button

    • @mirek190
      @mirek190 9 months ago

      update it

  • @sb6934
    @sb6934 9 months ago

    Thanks

  • @DrSid42
    @DrSid42 9 months ago +2

    ComfyUI sucks. It can't copy the seed from a result image when the batch size is greater than 1. Like, how do you people even use it?

  • @Eric_In_SF
    @Eric_In_SF 6 months ago

    I'll wait for the web GUI lol

  •  9 months ago

    Now you can create videos with SVD! Or can you? (suddenly Jake Chudnow - Moon Men Instrumental starts to play ) :D

  • @zobbizobba2088
    @zobbizobba2088 9 months ago

    Would it work on a Mac mini? Considering buying one…

  • @hogstoothairsoft1967
    @hogstoothairsoft1967 8 months ago

    thx for the video! tried it for a few hours, but without any other modifications it's nothing for me... (like length, prompts, maybe loras)

  • @Bob-em6kn
    @Bob-em6kn 3 months ago

    Do we have to install ALL the nodes?

  • @Stick3x
    @Stick3x 9 months ago

    Does not work with M1/M2 silicon Macs. I get an error.

  • @AIVideoZeppelin
    @AIVideoZeppelin 7 months ago

    Hello. I can't configure it. Where are the downloaded models placed within ComfyUI? In what folder, and how are they installed?

  • @Comic_Book_Creator
    @Comic_Book_Creator 9 months ago

    thank you, I made it work. So now a question: how do I control the animations? Right now it's like zoom in, zoom out... Is there any other workflow where we can also add a prompt?

  • @aguyandhisguitars435
    @aguyandhisguitars435 8 months ago

    Sadly I can't get this to work with an AMD GPU in Windows

  • @Naaame2077
    @Naaame2077 9 months ago +2

    Imagine going through all this for a 14 frame GIF.

    • @Icewind007
      @Icewind007 9 months ago +1

      You are the definition of spoiled.

    • @Nightmare026
      @Nightmare026 9 months ago

      cuckold

  • @Stormthedude
    @Stormthedude 9 months ago

    When loading the graph, the following node types were not found:
    VHS_VideoCombine

  • @CyberPhonkMusic
    @CyberPhonkMusic 5 months ago

    How do I create longer videos?

  • @Varibam
    @Varibam 9 months ago

    Help me Olivio-Wan Kenobi, you are my only hope...
    No matter what image I feed into it, all I get is a slight zoom-in. Tried cars, people, a beach, a fighting scene, etc. I used the exact workflow from the link without changing any settings. I even scaled the images down in GIMP to a 1024x576 resolution.

  • @tukanhamen
    @tukanhamen 9 months ago +1

    Can't we just add text-to-image to the workflow, then feed that image to SVD? That could be a temporary solution for text-to-video until we get official support, right?

    • @Gh0sty.14
      @Gh0sty.14 9 months ago +1

      Yeah there's a workflow that uses text to generate an image then it uses that for the video. Find the post titled "ComfyUI Update: Stable Video Diffusion on 8GB vram with 25 frames and more" in the SD subreddit and the link they provide includes the workflow.

  • @krakenunbound
    @krakenunbound 1 month ago

    Are there any specific PC requirements to run this?

  • @yakuza25rus
    @yakuza25rus 9 months ago +1

    what parameter changes the length of the video?

  • @aLx-gaming-official
    @aLx-gaming-official 8 months ago

    I tried to do the steps you have shown, but it seems like I run into some errors...
    If I try to do "Update All" it gives me the following error:
    Failed to update ComfyUI or several extensions / TypeError: Cannot set properties of null (setting 'onclick')
    If I load the JSON config it says:
    When loading the graph, the following node types were not found:
    RIFE VFI
    VHS_VideoCombine
    Seed (rgthree)
    Nodes that have failed to load will show as red on the graph.
    How can I fix this?

  • @Flightrovx
    @Flightrovx 9 months ago

    When loading the graph, the following node types were not found:
    RIFE VFI
    Seed (rgthree)
    VHS_VideoCombine
    Nodes that have failed to load will show as red on the graph.

    • @tybost
      @tybost 8 months ago

      You gotta click on the one already there and change it to RIFE 47 or 49.

  • @johanskaneby
    @johanskaneby 6 months ago

    Hey - where is the ComfyUI instruction you said you were going to link?

  • @user-xx8uu6tm2e
    @user-xx8uu6tm2e 9 months ago

    Why don't I have access to the video preview? I need to click on "Queue Prompt" and render it to see it.

  • @xXxDisplayNamexXx
    @xXxDisplayNamexXx 9 months ago

    I downloaded everything correctly, but my ckpt isn't loading in, despite it being in the right spot. All it gives me is an undefined/null and no option to select any models. I have the paths set up correctly in the YAML file, so I'm not totally sure what the issue is
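
    For anyone hitting the same YAML-path issue: ComfyUI ships an extra_model_paths.yaml.example you can copy to extra_model_paths.yaml. A minimal sketch of the relevant section (format follows that example file; the base_path below is a hypothetical A1111 install location, adjust to yours):

```yaml
# Sketch of extra_model_paths.yaml - points ComfyUI at an existing
# Automatic1111 model folder so checkpoints show up in the loader.
a111:
    base_path: C:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
```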