New Image2Video. Stable Video Diffusion 1.1 Tutorial.
- Published: 12 Feb 2024
- SVD 1.1 Tutorial in ComfyUI. How to easily create video from an image through image2video.
Detailed text & image guide for Patreon subscribers here: / turn-with-svd-1-98390700
Download SVD model here huggingface.co/stabilityai/st...
Basic ComfyUI Workflow comfyanonymous.github.io/Comf...
How to install ComfyUI • How to install and use...
Prompt styles for Stable diffusion a1111, Comfy & Vlad/SD.Next: / sebs-hilis-79649068
Get early access to videos and help me, support me on Patreon / sebastiankamph
Chat with me in our community discord: / discord
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
Detailed text & image guide for Patreon subscribers here: www.patreon.com/posts/turn-with-svd-1-98390700
pls add the Forge URL, peace
I wonder if we will be able to select an area that moves in a direction, instead of the whole image moving like a camera. That way I could make water move for 4 seconds, dump it into Video AI, slow the footage down, increase the frames per second, and also pump it up to HD. The cool thing with Video AI is that it fills in the gaps, so you get longer videos that are smooth and in slow motion.
Excellent video - I will have to try this out tonight!
And today came Sora and boom! 💥
Let's see what the SD team and the others do. Let the competition increase for our benefit 💪🏻😅
😂😂😂
Great stuff, more Ruined Fooocus tutorials please. I'm also in love with it
I do like it, but there's not a lot to cover on it :D
Thanks for this. Question. In the Manager I see you have a process running view, is that a plug in?
Crystools! Made a video on it
thanks for sharing!!!
Waiting for someone to train commercially free models
As a newbie in this space, may I ask what the downside of this is? I mean that you can't sell videos made using their model, right? Or is there more in their license agreement?
My workflow is not the same as his. Is yours the same or different? Although I've installed everything correctly, I'm still getting errors. What can I do about this?
Atm is it only capable of outputting a 1sec clip?
What I have noticed about all these image2vid is the length seems to be the problem without having to do any editing, I think I'll wait to mess with these when the length is no longer a issue
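On the length question above: SVD generates a fixed number of frames (25 for the XT/1.1 checkpoints), so the clip duration is just frames divided by the export fps. A minimal sketch of that arithmetic (the function name is illustrative, not from any library):

```python
def clip_length_seconds(num_frames: int, fps: int) -> float:
    """Duration of a generated clip: frame count divided by playback rate."""
    return num_frames / fps

# SVD 1.1 renders 25 frames; exported at 6 fps that is ~4.2 s.
# Exporting the same 25 frames at 12 fps halves the duration to ~2.1 s --
# the frame count, not the number of seconds, is what the model fixes.
print(round(clip_length_seconds(25, 6), 1))   # ~4.2
```

So "1 second" vs "4 seconds" is mostly a question of what fps the Video Combine node is set to, at the cost of slower or faster apparent motion.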
Can you write text prompts for the videos? And can you extend the videos from the last frame and concatenate?
Not by default in SVD. I have seen some workflows that expand on svd with prompting but then it's just running the output with denoising.
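For the extend-and-concatenate idea discussed above, the usual trick is to generate a second clip from the first clip's last frame and then join the two frame sequences, dropping the duplicated boundary frame. A minimal sketch of that bookkeeping, assuming frames are comparable objects (the function name is illustrative):

```python
def extend_frames(first_clip: list, second_clip: list) -> list:
    """Join two clips where second_clip was generated from first_clip's
    last frame; drop the duplicated boundary frame to avoid a stutter."""
    if first_clip and second_clip and second_clip[0] == first_clip[-1]:
        return first_clip + second_clip[1:]
    return first_clip + second_clip

a = ["f1", "f2", "f3"]
b = ["f3", "f4", "f5"]   # generated starting from a's last frame
print(extend_frames(a, b))   # ["f1", "f2", "f3", "f4", "f5"]
```

Note that because SVD only conditions on the image, the second clip's motion won't necessarily continue the first clip's motion; expect a visible change of direction at the seam.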
comfy hasn't worked for half a year for me and I kinda gave up on it. just super finicky to get any and all of its many parts working in tandem. I kinda hope a lot of this stuff comes to fooocus. Just soooo much easier.
I understand that fully!
I’m having fun with it. Stumbled upon a trick to use LCM acceleration too. So far, just boils down to seed luck
Okay i thought i was crazy lol I had gotten this working one day too but was dumb and lost the JSON file for it and never could get it working again
I can't find the files to download. Could you please let me know where they are?
Could someone explain to me why I don't have "Video Combine"? I mean the last window where I can see the clip. Thanks for the help!
All very clear, except one thing, in what menu is the node "Video Combine"?
VHS - Video helper suite. Custom node.
Love it! One question: I like how it moves my animation, but the faces are distorted. What parameters can I change or play with?
For SVD, none. You'd need a much more advanced workflow that re-renders the faces. It's all about rendering lots and being lucky.
If doing this in A1111, in which file would I put the following to make it work with low VRAM?
pipe.enable_model_cpu_offload()
pipe.unet.enable_forward_chunking()
frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
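The three lines quoted above are from the diffusers API, not A1111: `enable_model_cpu_offload()` moves idle submodules to system RAM, `enable_forward_chunking()` splits the UNet's feed-forward work into smaller pieces, and `decode_chunk_size` limits how many frames the VAE decodes at once. A rough, purely illustrative heuristic for choosing `decode_chunk_size` from available VRAM (the function name and thresholds are my assumptions, not from the library):

```python
def pick_decode_chunk_size(free_vram_gb: float, num_frames: int = 25) -> int:
    """Illustrative heuristic: decode fewer frames per VAE pass when VRAM is
    tight. decode_chunk_size=1 is safest but slowest; num_frames is fastest."""
    if free_vram_gb >= 24:
        return num_frames   # plenty of VRAM: decode all frames in one pass
    if free_vram_gb >= 12:
        return 8
    if free_vram_gb >= 8:
        return 4
    return 1                # low-VRAM cards: one frame at a time

print(pick_decode_chunk_size(8.0))   # 4
```

The quoted snippet would then pass this value as `decode_chunk_size=pick_decode_chunk_size(...)`. In ComfyUI and A1111 there is no file to paste diffusers calls into; the equivalent is using lowvram/medvram-style launch options or smaller batch settings.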
What do you mean?
Bro, I'm having an error but I don't know coding. Could you please help me?
Thank you
You're welcome!
Thank you. Text prompt is not possible?
Hello sir, can you give me a tutorial on how to create the Video Combine node there?
Am I right with the assumption that you have 0 control over the camera movement? The workflow simply decides what the "camera" has to do by analyzing the input?
Love your videos sebastian - Always to the point and thoroughly explained.
Do you have any plans to cover the Forge WebUI?
Thank you! I just might. What do you think about it?
For me personally, the speed at which it generates SDXL images on my 2080 is so much faster than in a1111.
Aside from that, I'm pretty new to all this and am still trying to figure out how to install it so it works with all my existing checkpoints/LoRas.@@sebastiankamph
Could you please do a tutorial for automatic1111?
Can you tell me the name of the extension that shows system info and the loading % of the generation at the queue prompt?
Yeah would like to know too!
Does anyone know what the VRAM requirements are for these models?
Minimum 4 GB
@@ijayraj Ok thanks. I read somewhere that it was 8, 4 seems a bit low to me. I have a 2070 so I have 8 anyway so will give it a go at some point.
@@justanothernobody7142 sure good luck bro
AI videos are awesome and absolutely frightening. My god how far they've come
Getting better each new version, just like in the early days of image generation :)
Sebastian, have you tried to replicate the image enhancement of Krea or Magnific? I spoke with the CEO of Magnific and he told me they have SDXL running under the hood…..I was just curious to know why no one has ever tried to do it……anyways, thanks for the video!
What you mean is they are replicating image enhancements of stable diffusion ;). It's just SD in a marketing package. No magic.
Can this be used with A1111 locally???
yes
Can you make a video on an AMD installation of SD?
I don't have an amd card, and I don't want to fake it :D
can I use this for commercials? on yt?
Go for it
hi. is there a way to tell it what we want ? or control the animation in any way?
No, and that's the main problem. At this rate it'll be years before anyone has any fine level of control over what comes out of text to video. To me it looks like this tech is pretty much dead on arrival, and I don't see it getting better anytime soon.
@aegisgfx it will improve at a rapid pace just like all the other AI tech.
@@Wobbothe3rd ya Ive been waiting for that, seen no improvements so far
@@aegisgfx And then Sora, Vidu and Google's video model got announced.
I hate it when an "upgrade" isn't a clear upgrade. Thank you.
Can we use a prompt to make changes?
Sadly not with this.
I’m guessing you can do text-to-image and then chain that to this workflow to achieve something similar. But yeah it’s just Image to Video for now.
Doesn't work well at all for me. I used your defaults and I get a model that looks deformed in the animation.
Is it possible to make a longer video? ~10 seconds
So i can use this to bring my images to life?
Pretty much
While the hamburger did look good, it wasn't realistic to the scene unless you were spinning the camera and the hamburger was on a spinning plate. It's too bad they can't give us access to a keyword system to control motions for the foreground and background. It seems like if they ran simulations on the results by injecting words into the mix, they could find the nodal points that generate consistent action in the results.
Nice! Is this possible with automatic1111?
No, but Forge!
I did not see where you put the prompts.
There are no prompts for svd
@sebastiankamph I've been messing about with Forge...it is FAST. Look forward to your video mate.
Great tip! Also, hi again
It's already good enough to replace entire production teams for commercials...
What is the software's name?
He's showing Stable Diffusion in ComfyUI.
To be accurate it's Stable Video Diffusion 1.1
Thanks!
I'm the 666th person to have liked this video :D may it bring you luck
Sorry, but I'm having this error:
Error occurred when executing LoadImage:
cannot identify image file 'H:\\ComfyUI\\ComfyUI_windows_portable\\ComfyUI\\input\\image_sample_1024x576_V01.png'
File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1459, in load_image
img = Image.open(image_path)
^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 3339, in open
raise UnidentifiedImageError(msg)
And did you know that 'gullible' isn't even in the Oxford English Dictionary? 🤣
Hi Sebastian, very helpful upload, thanks. But please, when are you going to do a help video all about the new Forge UI? It's 75% faster than Automatic 1111. Also, will the new Forge work on AMD GPUs? All the other YouTube uploaders seem to forget about those of us who don't want to, or can't, spend lots of money on new hardware just to test out, for example, the new Forge. Hope you understand, thanks.
Your channel has always been the best for all my AI Stable Diffusion help, thanks. 💯💯💯👍👍👍
Good suggestion! And thank you :)
Thanks Sebastian you're the one who got me into A.I. art But A1111 is still a bit slow on my AMD, would be fantastic for all us penny less AMD users if we all could start using Forge. @@sebastiankamph
The pace of text-to-video tech has been really slow compared to the text-to-image stuff. I have to say, all the txt-video solutions I have seen are incredibly unremarkable, especially compared to the txt-image models, which are all phenomenal. Unless the ball gets rolling on txt-video pretty fast, I'm calling this technology dead on arrival.
Image stuff that isn't the new DALL·E 3 / GPT tech has been practically stale for a year now. Nothing has really improved, and with the release of SDXL it took a bit of a hit, because training your own LoRAs and such for it is as big a pain as it was in the early days of training for SD 1.5.
Slower, yes. And video consistency is also much harder to achieve than a still image. Looking back a year from now, I'm sure the comparison will be breathtaking :)
@bladechild2449 I thought it was just me. I took a break for a year and came back and I have been struggling to find anything thats really new.. 2022-early23 was moving so fast.
It's still not great
Great info man, thanks for keeping us updated about new stuff. I followed the whole video but am still getting this error in the command prompt:
"""
0%| | 0/20 [00:02
You don't have enough video/gpu memory. It says out of memory there sadly.
@@sebastiankamph I have a 4 GB RTX 3050; can't we solve this error?
@@sebastiankamph I want to use this feature 😔
You can use a cloud solution like Thinkdiffusion.com@@ijayraj
@@sebastiankamph Will try it, thanks man. Thank you very much for the support.