Stable Diffusion Video (SVD) over 24 frames test
- Published: 26 Oct 2024
- In this test, I used a random input image with the ComfyUI below-8-GB-VRAM implementation at 24 frames, then took the last frame of the generation and re-inputted it as the initial frame to generate another 24 frames. I repeated this 5 times. There is a slight loss of detail with this method, and it would most likely need an upscaling pass on the last frame for better quality.
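The chained generation described above (feed each 24-frame segment's last frame back in as the next segment's input) can be sketched roughly like this. `generate` is a placeholder for whatever SVD backend is used (a ComfyUI workflow, diffusers' `StableVideoDiffusionPipeline`, etc.); it is an assumption for illustration, not the author's actual setup:

```python
from typing import Callable, List


def chained_generation(
    initial_frame,
    generate: Callable,
    segments: int = 5,
    frames_per_segment: int = 24,
) -> List:
    """Chain short SVD clips into one longer sequence.

    Each segment is generated from a single input frame; the
    last frame of every segment becomes the input for the next
    one. Detail degrades slightly with each hop, which is why
    an upscaling pass on the hand-off frame helps.
    """
    all_frames = []
    current = initial_frame
    for _ in range(segments):
        clip = generate(current, num_frames=frames_per_segment)
        all_frames.extend(clip)
        current = clip[-1]  # last frame seeds the next segment
    return all_frames
```

With 5 segments of 24 frames this yields 120 frames total from one input image, at the cost of the gradual detail loss the author mentions.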
Wow, this is scary good... and it's all happening so fast. 😍
When it gets volumetric, these solutions run into problems. These techniques only get "dangerous" if a breakthrough in 3D reconstruction happens, and that's not so easy with arbitrary video feeds. In nature the solution for 3D reconstruction is stereoscopic vision: for every "frame" there's enough data to reconstruct depth when you have a stereo feed, so your brain gets a 3D representation of the environment immediately. Here they need to infer 3D from frame-to-frame transitions, which are arbitrary in each video. I'd imagine this solution would need a much more powerful "brain" than organisms have in order to build coherent 3D sequences.
Looking good
But you can still tell that it is artificial, the picture looks blurred when there is movement. The eyes also look blurred, as if they are sliding back and forth on the skin.
Dude, Stable Diffusion animation was just released. What do you expect? Stable Diffusion itself isn't even that old. This is already creepy af. In a few years this will look way different.
@@beatemero6718 And a few years later, software like Stable Diffusion will be banned because there will always be assholes who will abuse it for criminal activities. 😒
Can you make her talk though? I'm working on AI spokespersons and I'd like to pair a talking AI image with voiceover audio generated by elevenlabs
We are still in the very early stages of AI. Imagine its capabilities in 10 years' time
We’re getting closer, do a tutorial next for one lol
I wish I had the time at the moment; I barely get a chance to keep up with the changes in technology these days.
@@explorationsci5738 Dude, I feel you so much, and every day it's getting faster and faster. I feel like the moment I finish watching some AI news there's already twice as much new stuff. It's crazy. Great work on the test though!
@@explorationsci5738 Feeling that lol. With all my other obligations I just can't get into every new step :D At this point I'm observing and only taking part in the most major upgrades in the tech.
@@EfpeTester Kinda scary tbh.
@@EfpeTester Gonna be better than the Mutant X AI
Can you link a workflow? Is this done manually or in nodes?
We are getting close to full movies made with AI
Great content man! I see you have the same issue with the saturation. I think it comes from the VAE. In AUTOMATIC1111 you can set the VAE to None, but it seems ComfyUI always needs one. Is there a way not to include the VAE in ComfyUI?
The VAE is a vital part of the process. When no VAE is set in AUTOMATIC1111, it will use the VAE from the checkpoint file.
ComfyUI is more transparent about what's happening. The equivalent of using 'None' in AUTOMATIC1111 is to use the VAE from the checkpoint file.
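Roughly, the fallback behaviour described here amounts to the following. This is a hypothetical sketch of the selection logic, not AUTOMATIC1111's or ComfyUI's actual code:

```python
def resolve_vae(user_vae, checkpoint):
    """Pick which VAE will decode the latents into pixels.

    Mirrors the behaviour described above: choosing 'None' in
    AUTOMATIC1111 means 'use the VAE baked into the checkpoint
    file', not 'skip the VAE' -- a VAE is always required, since
    it is the component that turns latents into an image.
    """
    return user_vae if user_vae is not None else checkpoint["vae"]
```

So there is no way to run without a VAE in either UI; ComfyUI just makes the choice explicit by requiring you to wire one in.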
Excellent idea! I was trying to do exactly the same but got stuck on loop logic :/ . If you are willing to share your workflow then I would owe you a pint ;)
How long did it take you to render?
1000 subscribers 👍😉 - I am the thousandth subscriber
I've seen an alien! There, in the middle!
txt2video or video2video?
img2video I'm guessing.
this tech is going to be unreal in 10 years
Not bad, still not quite there yet, but AI is getting there.
She looks sleepy but wow.
She looks like a character from "WestWorld" that is ready to destroy humanity! 🤣
This is probably how you see the world on bath salts or some other drug.
Wow is this AI?
What was that? :)
Is she drunk or something?