Stable Video Diffusion Tutorial: Mastering SVD in Forge UI

  • Published: 27 Sep 2024

Comments • 136

  • @pixaroma
    @pixaroma  6 months ago +9

    If the new version doesn't show the SVD tab, just install another copy of Forge in a different folder and go back to the version from the video that has that tab; I explain it here: ruclips.net/video/BFSDsMz_uE0/видео.html
    I got the checkpoint model for SVD from here:
    civitai.com/models/207992/stable-video-diffusion-svd
    Remember it can generate at 1024x576 px or 576x1024 px; you can upload bigger images, but try to keep the same proportions.
    It can generate a 4-second video; you can probably take the last frame and generate a continuation of it for another 4 seconds, and so on.
    You need about 6-8 GB of VRAM.
    I used Stable Diffusion Forge UI.
    If you want to learn more about AI or have questions, join my Facebook group:
    facebook.com/groups/pixaromacommunity

    • @nietzchan
      @nietzchan 5 months ago

      Still can't get it to work in Forge. I don't know what I'm doing wrong; does it have to use an SD 1.5 checkpoint and VAE first?
      I tried using Animagine XL 3.1 with the XL VAE for the initial image feed, and running SVD just sent me to a BSOD.

    • @pixaroma
      @pixaroma  5 months ago +1

      @@nietzchan It needs a lot of video RAM, that's why it's crashing. It just needs a photo at that 1024x576 px size and works with that SVD model to generate; if it crashes, your video card probably can't handle it.

    • @nietzchan
      @nietzchan 5 months ago

      @@pixaroma I think my Forge installation has memory management issues, or maybe something is wrong with the U-Net setting. I managed to run it once when I was only running SVD, but the second time it just crashed.
      I'm currently using a 12 GB 3060 and 16 GB of RAM, and I think the bottleneck is actually the RAM, since Forge automatically loads models into RAM on start.
      I want to try the offload-from-VRAM options and see if that helps.

    • @nietzchan
      @nietzchan 5 months ago

      Confirmed, I need more VRAM. I tried the offload-models-from-VRAM args so SVD would have plenty of room on the GPU. I'm using an RTX 3060 12 GB; even though SVD only uses around 8 GB of VRAM, the Forge backend still keeps the image diffusion model in VRAM, resulting in an OOM on my GPU. The offload args work, but they don't offload the SVD model once you generate a video, so I'm back to square one after each generation. Oh well.

    • @pixaroma
      @pixaroma  5 months ago

      @@nietzchan Sorry it didn't work. They usually make these less VRAM-hungry over time, so in a few months we may have better models and systems.
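The supported-size rule from the pinned comment (1024x576 landscape or 576x1024 portrait only) can be sketched as a small helper. This is a hypothetical illustration, not part of Forge: `svd_target_size` is a made-up name, and treating square images as landscape is an arbitrary choice.

```python
def svd_target_size(width: int, height: int) -> tuple[int, int]:
    """Pick the SVD-supported output size matching the input's orientation.

    SVD only generates 1024x576 (landscape) or 576x1024 (portrait);
    square-ish inputs fall back to landscape here (an arbitrary choice).
    """
    return (1024, 576) if width >= height else (576, 1024)

# e.g. a 1920x1080 photo would be resized toward 1024x576 before upload
print(svd_target_size(1920, 1080))
```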

  • @CornPMV
    @CornPMV 6 months ago +4

    Nice tutorial; I enjoy the AnimateDiff extension, I get really good and consistent results using it!

    • @iangillan1296
      @iangillan1296 3 months ago

      Where do you use it? In Comfy?
      I installed AnimateDiff inside Forge UI, and there are no changes; AnimateDiff didn't appear.

  • @PredictAnythingSoftware
    @PredictAnythingSoftware 6 months ago +1

    Thank you for the video using Forge. Please make more videos using Forge, since this is the only GUI where I can run SDXL models on my low-end RTX 2060 6 GB VRAM PC.

    • @pixaroma
      @pixaroma  6 months ago

      Sure, I also have an older computer with the same video card, and with Forge I managed to get it to work; it even seems to work better on my new RTX 4090. So for a while I will do only Forge, unless Automatic1111 adds something that Forge can't do :)

  • @SantanuProductions
    @SantanuProductions 10 days ago

    I thought I would get past those expensive subscription image-to-video AI models. Now I got caught by Topaz for the hi-res fix.

    • @pixaroma
      @pixaroma  10 days ago

      I use Topaz Video AI for video upscaling.

  • @baheth3elmy16
    @baheth3elmy16 6 months ago +1

    Thanks! Another good tutorial video!

    • @pixaroma
      @pixaroma  6 months ago

      Thanks :) glad you like it

  • @FranzGorask
    @FranzGorask 2 months ago

    Very good content, I appreciate it. Keep it like this.

  • @cruz2480
    @cruz2480 4 months ago

    Great video, subscribed. Keep making great content.

  • @fishpickles1377
    @fishpickles1377 3 months ago

    Very cool! Wish I had the hardware to run it!

  • @DrDaab
    @DrDaab 5 months ago

    Great, thanks a lot.

  • @Robertinosro
    @Robertinosro 5 months ago

    Cool stuff, thank you

  • @Ollegruss_Music
    @Ollegruss_Music 3 months ago

    Thanks!

  • @robroufla
    @robroufla 6 months ago

    Thanks! Yes, shame about the lack of settings for the SVD output path. It'd be great to have camera movement and prompt guidance like in Deforum, but with the consistency of SVD. Soon, I'm sure ;)

  • @Rithman
    @Rithman 6 days ago

    I don't have the SVD tab and my Forge is updated (I ran update.bat and it said "Already updated"). I'm on the main branch, and other branches (like dev) aren't there if I do "git fetch" -> "git branch". What am I doing wrong? Help please...

    • @pixaroma
      @pixaroma  6 days ago +1

      The latest version doesn't have it; check this video where I talk about how you can go back to an older version: ruclips.net/video/BFSDsMz_uE0/видео.htmlsi=lITYLYk1millsWY-

  • @zimxh
    @zimxh 25 days ago

    Would you know why I'm getting this error whenever I try to generate?
    TypeError: KSamplerX0Inpaint.__init__() missing 1 required positional argument: 'sigmas'

    • @pixaroma
      @pixaroma  25 days ago

      They keep updating Forge; some things work and others don't, it has been very unstable lately. You can see if anyone else has the same error, or you can report the issue on their page: github.com/lllyasviel/stable-diffusion-webui-forge/issues

  • @GreenNicole
    @GreenNicole 18 days ago

    Clark Elizabeth Young Amy White Margaret

    • @pixaroma
      @pixaroma  18 days ago

      Sherlock Holmes Watson Lestrade Moriarty Baskerville Irene 😂

  • @FantasyArtworkAI
    @FantasyArtworkAI 4 months ago

    Mine creates the video in the folder \Stable Diffusion Forge\webui\output\svd, which is the same output folder where img2img and txt2img are.

    • @pixaroma
      @pixaroma  4 months ago

      I haven't used it for a while, but I think I set all the paths in the settings to lead to the same folder.

  • @onlineispections
    @onlineispections 2 months ago

    Which Stable Diffusion Forge UI model should I download? Because I don't see the SVD option. Thanks

    • @pixaroma
      @pixaroma  2 months ago

      I had one with a commit hash that starts with 29.

    • @onlineispections
      @onlineispections 1 month ago

      Do you have a link to download that one as a .bat file? @@pixaroma

  • @levagicien9904
    @levagicien9904 23 days ago

    Has SVD been removed from SD Forge?

    • @pixaroma
      @pixaroma  23 days ago

      Yes, but you can still install an older version that has it; see how I explained it here: ruclips.net/video/BFSDsMz_uE0/видео.htmlsi=ygQEkbZg41I8aiYS&t=986

  • @anon3253
    @anon3253 4 months ago

    I'm trying to use SVD on my GTX 1660 Ti, but it doesn't seem to be working. I'm encountering error messages.

    • @pixaroma
      @pixaroma  4 months ago

      Maybe your video card doesn't have enough VRAM, I'm not sure; for me it worked with those settings.

  • @bekosh248
    @bekosh248 4 months ago

    Great video! Do you know if Forge or any other UI like this can inpaint a certain section of your image, so that only that inpainted portion gets animated?

    • @pixaroma
      @pixaroma  4 months ago

      I don't know of any; I only saw some online platforms that have a motion brush, but I haven't seen anything like that in Stable Diffusion yet.

    • @YoshikiBeats
      @YoshikiBeats 3 months ago

      Only with ComfyUI

  • @lorenzodecarlo9125
    @lorenzodecarlo9125 4 months ago

    Thank you for the video! I don't have an svd folder in webui > models. Why?

    • @pixaroma
      @pixaroma  4 months ago

      It should be there since you installed Forge UI; maybe you have another UI, something like A1111? Not sure what to say.

  • @woodtech1951
    @woodtech1951 2 months ago

    Great video. Question for you: my GPU has 24 GB but the software only shows the dedicated 8 GB, and I'm trying to make sure it's utilizing all 24 GB. I tried more frames, and Task Manager shows that it taps into the shared GPU memory, so maybe I'm just overanalyzing it. (RTX 3060 Ti for reference)

    • @pixaroma
      @pixaroma  2 months ago +1

      Depending on what you give it to do, it will use more VRAM, don't worry; big images or video need more VRAM, so it will use more. It takes what it needs in that moment. I have 24 GB of VRAM and it takes about 4-5 seconds to generate a 1024 px image.

    • @woodtech1951
      @woodtech1951 2 months ago

      @@pixaroma Thanks for the quick reply! So does that mean I could possibly increase the width and height of the output video?

    • @pixaroma
      @pixaroma  2 months ago

      @@woodtech1951 Unfortunately that model only works at that size, so if you increase it, it will not work. It's better to just use an upscaler afterwards, like Topaz Video AI or something. For images you can increase the size, but not for that specific video model. It's an old model and I haven't found a better version yet :( You can take a look at Luma AI for image-to-video, it's better than this, and you get about 5 free videos; or RunwayML version 3. I rarely use this model anymore because it doesn't have much motion. I'm playing more with ComfyUI, where I have more options for animation; as I learn more about it, I'm making more tutorials for it.

  • @onlineispections
    @onlineispections 2 months ago

    Hello. I installed Stable Diffusion, downloaded stableVideoDiffusion_img2vidXt11.safetensors, and put it in the SVD folder, but I don't see the SVD option on the home page. Why?

    • @pixaroma
      @pixaroma  2 months ago

      Not sure what the cause could be; I haven't used it in the last 5 months. Are you using Forge or another version? Maybe you have Automatic1111 instead of Forge UI, or maybe you updated to a version that doesn't have that tab. Usually the SVD tab should already be there even if you didn't add the model.

  • @mert5809
    @mert5809 3 months ago

    Thanks for the video. I use Comfy; although I generate with the same settings you show, my results are very noisy, with grain all over. Is it related to Comfy itself? I don't know.

    • @pixaroma
      @pixaroma  3 months ago +1

      Not sure, I will play more with ComfyUI in the coming months.

    • @mert5809
      @mert5809 3 months ago

      @@pixaroma I was using tiled upscale for the image, and I realized it affects SVD output quality in a bad way. Just leaving a little tip for others.

  • @olternaut
    @olternaut 1 month ago

    For some reason I don't see the Train, SVD, or Z123 tabs in my Forge UI install. I'm sure I have the latest install. Anybody know what the problem is?

    • @pixaroma
      @pixaroma  1 month ago +1

      The latest install is probably not the stable version that I use. I have a video on the channel about downgrading or updating Forge; I use the version with the commit that starts with 29.

    • @olternaut
      @olternaut 1 month ago

      @@pixaroma I'll have to look for it. Then again, even though Auto1111 is slower, it seems at least to be more stable. When Forge UI gets their act together I'll check it out again.

    • @pixaroma
      @pixaroma  1 month ago

      @@olternaut Well, the problem is that it wasn't updated officially, and all the new versions might break your Forge. That's why I switched to ComfyUI. A1111 is a little slower, but there I also have to wait for updates when something new appears, like SD3 and so on, while in ComfyUI I have it the next day. ruclips.net/video/RZJJ_ZrHOc0/видео.html

    • @olternaut
      @olternaut 1 month ago

      @@pixaroma I hear what you're saying. But Comfy seems needlessly complex. It's like Dad yelling at me to do my homework and I begrudgingly get to it after dragging my feet. lol

    • @pixaroma
      @pixaroma  1 month ago

      Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
      ruclips.net/video/BFSDsMz_uE0/видео.html

  • @wayneout
    @wayneout 5 months ago

    I get the error message "AttributeError: 'NoneType' object has no attribute set_manual_cast" when I upload an image from my computer. I don't know how to correct this error. Thank you

    • @pixaroma
      @pixaroma  5 months ago

      Did you use exactly the same settings? Also make sure the image size is the same as in the video; if not, resize it. There are some bugs when you use a width and height that are not divisible by 64, so maybe that can fix it.
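The divisible-by-64 tip above can be sketched as a small helper that snaps each dimension to the nearest multiple of 64 before uploading. This is a hypothetical illustration (the function names are made up, not part of Forge):

```python
def snap_to_64(value: int) -> int:
    """Round one dimension to the nearest multiple of 64 (minimum 64)."""
    return max(64, round(value / 64) * 64)

def snap_size(width: int, height: int) -> tuple[int, int]:
    """Make both dimensions divisible by 64, as the SVD tab expects."""
    return snap_to_64(width), snap_to_64(height)

# e.g. an awkward 1000x570 upload would be snapped to 1024x576
print(snap_size(1000, 570))
```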

  • @onlineispections
    @onlineispections 1 month ago

    Hi. In the downloaded Stable Diffusion package, there is no SVD option; what can I do?

    • @pixaroma
      @pixaroma  1 month ago

      Did you try a stable version? ruclips.net/video/RZJJ_ZrHOc0/видео.htmlsi=kLp0fpY5boKvYImP

    • @onlineispections
      @onlineispections 1 month ago

      @@pixaroma Hello, I did downgrade to 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7, but it does not have the SVD tab in the interface. Do you know another method to download a Stable Diffusion with SVD with comango.git?

  • @sircasino614
    @sircasino614 5 months ago

    So you can't use a prompt for "how" you want it to animate or move?

    • @pixaroma
      @pixaroma  5 months ago

      No, it's all based on the image; maybe they'll fix that in the future.

  • @kridadkool1319
    @kridadkool1319 5 months ago

    Fam, I wanna know about that AI voice. DOPE vid!!

    • @pixaroma
      @pixaroma  5 months ago

      I am using VoiceAir AI

    • @Kevlord22
      @Kevlord22 5 months ago

      It's good; until I read this, I had no idea it was an AI voice. Pretty cool.

  • @WizzardofOdds
    @WizzardofOdds 4 months ago

    I seem to get a bit closer to animation using this. I tried AnimateDiff, but all I get is a still image. When I click generate with the SVD module I can see a progress bar, but then I get an error. Is this because I did the one-click download of Forge (which may be the issue with AnimateDiff), or is it possible that I just don't have the right amount of VRAM? I have an NVIDIA GeForce GTX 980 Ti.

    • @pixaroma
      @pixaroma  4 months ago

      I think you need more than 6 GB of VRAM; usually RTX cards with 8 GB or more work better. I saw another comment saying that 6 GB gave an error.

    • @WizzardofOdds
      @WizzardofOdds 4 months ago

      @@pixaroma Thanks, I guess I need an upgrade. Your videos are very helpful.

  • @justlivedekhing
    @justlivedekhing 4 months ago

    Brother, I got this error, any help please:
    raise FFExecutableNotFoundError(
    ffmpy.FFExecutableNotFoundError: Executable 'ffprobe' not found

    • @pixaroma
      @pixaroma  4 months ago

      You probably need to install FFmpeg; I haven't had that error yet.
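The ffmpy error above just means the FFmpeg executables can't be found on PATH. A quick way to check before generating (a sketch using only the Python standard library; `missing_ff_tools` is a made-up helper name):

```python
import shutil

def missing_ff_tools(names=("ffmpeg", "ffprobe")) -> list[str]:
    """Return the FFmpeg executables that are not found on PATH."""
    return [name for name in names if shutil.which(name) is None]

missing = missing_ff_tools()
if missing:
    print("Install FFmpeg and add it to PATH; missing:", ", ".join(missing))
else:
    print("FFmpeg tools found")
```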

  • @k_y_l_3
    @k_y_l_3 6 months ago

    Is anyone else having an issue with "RuntimeError: Conv3D is not supported on MPS"?
    Some people on GitHub said it might have something to do with the PyTorch version, but I think mine is the right version.

    • @pixaroma
      @pixaroma  6 months ago

      I didn't get that error, but from what I found online it seems to be related to macOS and Apple processors. The error is due to PyTorch's Metal Performance Shaders (MPS) backend not supporting the Conv3D operation on Apple Silicon (M1, M2, etc.). I'm on Windows, so I'm not sure what that means in practice, but maybe it makes more sense to you. The usual suggestion is to update PyTorch, but that will only work if they have added that support for the processor.

  • @rakibislam6918
    @rakibislam6918 25 days ago

    How do I generate an 8-10 second video? SD or Comfy?

    • @pixaroma
      @pixaroma  25 days ago

      I haven't used SVD for a few months, now that new versions have appeared. And for ComfyUI I will do a video when I get to that part; I still have more to show on the image side before I get to video.

    • @rakibislam6918
      @rakibislam6918 24 days ago

      @@pixaroma Is there any open-source video model that generates 8-10 second videos?

    • @pixaroma
      @pixaroma  24 days ago

      I don't know any that go that long. Look at CogVideoX, that is the latest video model I know of.

  • @caucho6.6.86
    @caucho6.6.86 5 months ago

    How can I add a prompt to the video, if I want to make specific videos?

    • @pixaroma
      @pixaroma  5 months ago +1

      This one only works with images, so you can do text-to-image first, then use that image to make the video; it doesn't do text-to-video directly.

  • @ArtistrystoriesUnleashed45
    @ArtistrystoriesUnleashed45 1 month ago

    How do I install it in Forge UI?

    • @pixaroma
      @pixaroma  1 month ago

      I just reverted to an older version that has SVD; you can install it in a separate folder and just keep that older version: ruclips.net/video/BFSDsMz_uE0/видео.htmlsi=v9zBYtWpLJuAfidm&t=984
      I created a .bat file that goes back to that version, see the video:
      @echo off
      rem Put the bundled Git on PATH (assumes %DIR% is set to the install folder)
      set PATH=%DIR%\git\bin;%PATH%
      rem Check out the last Forge commit that still has the SVD tab
      git -C "%~dp0webui" checkout 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
      pause

  • @fixelheimer3726
    @fixelheimer3726 6 months ago

    The snow overlays were not created with SD, I guess?

    • @pixaroma
      @pixaroma  6 months ago

      No, it is just a snow overlay video

  • @KevlarMike
    @KevlarMike 4 months ago

    299 one-time payment for Topaz, but at least it's a one-time payment ❤

    • @pixaroma
      @pixaroma  4 months ago +1

      I think I got it on Black Friday, it was cheaper then :)

  • @idolgalaxy69
    @idolgalaxy69 6 months ago

    Can we do batch rendering?

    • @pixaroma
      @pixaroma  6 months ago +1

      I didn't find an option for video, so I don't think it is possible, or I just didn't find it.

    • @idolgalaxy69
      @idolgalaxy69 6 months ago

      @@pixaroma Thanks~ your tutorial is great and clear~

  • @manolomaru
    @manolomaru 5 months ago

    ✨👌😎🙂😎👍✨

  • @lowserver2
    @lowserver2 6 months ago

    Still ran out of memory with these exact settings on 8 GB of VRAM

    • @pixaroma
      @pixaroma  6 months ago

      I don't have 8 GB to test it, but online it said it could work; sorry to hear it doesn't :(

    • @lowserver2
      @lowserver2 6 months ago

      @@pixaroma Sorry, I tried again after restarting Forge and it did work. However, I can't get good results yet. It mostly wants to do panning, and the stuff outside the original picture becomes all distorted, so I don't know.

    • @pixaroma
      @pixaroma  6 months ago

      Try different seeds until you get one that works; unfortunately we don't have control, hope future models fix that.

    • @pixaroma
      @pixaroma  6 months ago

      @@lowserver2 Also try using images where the subject doesn't touch the edge, i.e. not cropped; so if you have a portrait, make sure it has some space around it. Then it can rotate it without distorting; if it's at the edge, it tries to extend it and can fail.

  • @JarppaGuru
    @JarppaGuru 5 months ago

    3:08 Yes, the seed is like a million-way variable; some outputs are complete garbage. It tells you what AI actually does: what it is programmed to do, not any intelligence.
    It will not create a rabbit unless the training data has a rabbit, and it will be the same rabbit for those prompt words.
    It works well with this robot because the training data has many images of this robot.
    It did not work well for a picture of a man with my face swapped in; the background moves if I find the right seed, but "me" does not change at all, LOL.
    I got so bored; the first attempt worked but the rest did not, lol. All that waiting to get garbage!
    You can't even choose to render frame 7 without making the whole video, or render multiple images using different seeds so you can choose.
    A seed is like the motion from one trained clip; it will do exactly that if your image matches (trained to do it, not AI).
    Seed 1 could be pan left, seed 2 could be pan right, etc.
    What have we learned? AI results need to be checked. Don't build Skynet and plug it into the red button (it will push the red button if it is programmed to do it); but if a human checks the result and a human pushes the red button, not the AI, then we don't have Skynet, just an AI tool (automated instructions, as I say).

    • @pixaroma
      @pixaroma  5 months ago

      Well, in this case, since it's based on an image, the image is the variable; you can have infinite unique images as input. And yeah, it's not the AI we see in the movies; it's a trained model that does what it was trained to do and knows only that, for now.

  • @TheMaxvin
    @TheMaxvin 6 months ago +1

    SVD is so boring, it's basically background light motion.

    • @pixaroma
      @pixaroma  6 months ago

      Yeah, definitely needs more work

    • @ranjithgaddhe9818
      @ranjithgaddhe9818 6 months ago

      SVD has better orbit camera motion, I have tried it.
      Other models don't even do tracking well, but SVD still needs more training for better results.

  • @retikulum
    @retikulum 3 months ago

    Such a piece of crap extension. I create one video, VRAM gets filled, the video finishes, VRAM stays full -> OOM when trying to create the next video. So, restarting SD after every video creation. How stupid.

    • @pixaroma
      @pixaroma  3 months ago

      It needs a lot of VRAM or you can't do much with it

    • @retikulum
      @retikulum 3 months ago

      @@pixaroma Huh? No, like I said: the 1st video works, the 2nd video OOMs because the VRAM is still full from the first video. It doesn't get flushed; PyTorch keeps the VRAM reserved.

    • @pixaroma
      @pixaroma  3 months ago

      Yeah, it possibly doesn't handle memory how it should, but if you have more it never gets full, so it still works; it never crashed with 24 GB of VRAM. It still seems to be an old model version, and I haven't seen a new one that works with Stable Diffusion, so I keep using that one. I am waiting for Sora or alternatives.

  • @onlineispections
    @onlineispections 2 months ago

    Hello. I installed Stable Diffusion, downloaded stableVideoDiffusion_img2vidXt11.safetensors, and put it in the SVD folder, but I don't see the SVD option on the home page. Why?

    • @pixaroma
      @pixaroma  2 months ago

      I have this version: Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7. You can see here how to switch between different versions: ruclips.net/video/RZJJ_ZrHOc0/видео.html

  • @denizkendirci
    @denizkendirci 14 days ago

    I installed Forge via Pinokio, and I don't have an SVD tab in the UI.
    How do I fix it, anybody?

    • @pixaroma
      @pixaroma  14 days ago +1

      Check the pinned comment

    • @denizkendirci
      @denizkendirci 14 days ago

      @@pixaroma Thanks so much, sorry for not paying attention to the pinned one.

  • @SumoBundle
    @SumoBundle 6 months ago +2

    Thank you for this tutorial

  • @MelizzanoDaquila
    @MelizzanoDaquila 1 month ago +1

    My Stable Diffusion Forge does not have SVD.
    I tried downloading again and the SVD still didn't appear. :(

    • @pixaroma
      @pixaroma  1 month ago

      Did you try the last stable version? The new updates mess up a lot of things: ruclips.net/video/RZJJ_ZrHOc0/видео.htmlsi=1d9jphW2PuLB0RK6

    • @pixaroma
      @pixaroma  1 month ago

      Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
      ruclips.net/video/BFSDsMz_uE0/видео.html

  • @twd2
    @twd2 1 month ago

    My Forge UI does not have an SVD tab!!!

    • @pixaroma
      @pixaroma  1 month ago +1

      The latest version doesn't have it anymore; it's only there if you install an older version. The version I used back then had a commit that starts with 29.

    • @pixaroma
      @pixaroma  1 month ago +1

      Check this video: How to Install Forge UI & FLUX Models: The Ultimate Guide
      ruclips.net/video/BFSDsMz_uE0/видео.html

    • @twd2
      @twd2 1 month ago

      Great, thanks 😍!!!

  • @CsokaErno
    @CsokaErno 1 month ago

    SVD doesn't come up for me.

    • @pixaroma
      @pixaroma  1 month ago +1

      Check the latest video, the one with Forge and Flux, from the playlist. I use an older version that had an SVD tab; the new version doesn't have it yet. So you can go back to that version to get the SVD tab, but it will not have Flux and the new stuff the new version has.

    • @CsokaErno
      @CsokaErno 1 month ago

      Thank you.

  • @MisterWealth
    @MisterWealth 4 months ago

    How do websites like Leonardo make it so it looks like the wings on a fly are flapping, for example? I'm having a hard time generating a high-quality video like that from SVD; it's super grainy.

    • @pixaroma
      @pixaroma  4 months ago

      Not sure what kind of models they are using; probably if you generate a lot of them, some would have more interesting movements. Other AIs I saw have brush controls where you paint to tell it what to move in the image, so you have more control. Or like with Sora, when it's released, with a prompt you can tell it what to do.

  • @dziku2222
    @dziku2222 4 months ago

    Doesn't work for me; animated images are just elongated or squished with some corruption, instead of those cool animations you showed. I use your dimensions and the model from the link. Why?

    • @pixaroma
      @pixaroma  4 months ago +1

      Not sure what to say; maybe they changed something since I made the tutorial. If that happens with every image you use, I can't find an explanation.

    • @dziku2222
      @dziku2222 4 months ago

      @@pixaroma Sorry to bother you, but it looks really interesting and I would like to get it running; maybe the cause of the error is simple for someone far more experienced than me.
      I've discovered that it works normally when I'm using the baseline Realistic Vision model that comes with Forge UI, but not when I'm using something generated with old SD 1.5 models like AbyssOrangeMix.

  • @snatvb
    @snatvb 6 months ago

    The worst thing is that I can't really control it :(
    It would be great if I could add prompts, masks, etc., like in other SD tools.

    • @pixaroma
      @pixaroma  6 months ago

      Yeah, I understand; I hope they improve it in the future. For now it is all random and needs a lot of tries to get something nice. But 2 years ago image generators were basic too, so video will probably get better; it just needs time.

    • @snatvb
      @snatvb 6 months ago +1

      @@pixaroma Yep, I agree :)

  • @RenoRivsan
    @RenoRivsan 5 months ago

    Does the checkpoint matter??

    • @pixaroma
      @pixaroma  5 months ago

      I think so; this one works with these settings, but others might have other recommended settings.

  • @makadi86
    @makadi86 6 months ago

    Is this the best SVD, or are there other recommended models we can try?

    • @pixaroma
      @pixaroma  6 months ago +1

      For Stable Video Diffusion I didn't find a better one; Stability AI released just one model for video, compared to the image side, where they released more than one.

  • @richctv
    @richctv 6 months ago

    Awesome tutorial. Keep up the great work

    • @pixaroma
      @pixaroma  6 months ago

      Thank you :)

  • @UmarandSaqib
    @UmarandSaqib 6 months ago

    Nice one!

  • @sb6934
    @sb6934 6 months ago

    Thanks!