You're a genius. Amazing job you're doing!
Ah thanks man! That's so kind of you.
I believe the technical term for an AI enthusiast is a "proompter" :)
Not true. Language is an integral part, but with complex node-based processes it's only a fraction of the work.
Have you worked with the animation side of things yet? I'm struggling to get the animations to come out like the single images do. The results aren't wildly different, but it's almost like it's using a different model.
Also, how do you have it set up so that you can see the image as it's generating? Mine just goes through the whole process and then outputs the final image. I mainly want to see what's happening while the animation is processing; currently I have to wait for the whole sequence to be finalised before I see what the result will look like. Thanks :)
You're always going to get that weird morphing effect with frame-by-frame SD animation. No matter what tricks you try, no frame is 100% the same as the last, at least currently. I'm sure someone out there is working on it. The AI video you see now is made with video-trained models. What we need is a hybrid or a ControlNet designed for frame-by-frame img2img denoising. The current tech is AnimateDiff and Deforum; see the examples on this channel. Personally I like SVD, but that offers little control.
@@matthallettai Well, the issue I'm having is not the difference between frames: the initial outcome is completely different when doing a single frame with the same settings as when I hit animate.
I said not wildly different, but sometimes they are. I train a model to produce what I want for each frame, but when I go to animate it's like I've used completely different prompts. I'm lost.
@R1PPA-C It depends on the complexity of your scene: the more the AI has to interpolate with what it "sees", the worse it gets. The examples of other animations you've seen look smooth because of their simplicity in shape and materials. Leaves and grass, for example, will change dramatically between frames no matter what you do. Small details change so much it's not worth it. Trust me, it's not you.
Thank you for the video. What stable diffusion models can you recommend, specifically for interior design and architecture separately?
Don't bother with any model that claims it's good specifically for interiors or architecture, unless it's a LoRA add-on you want to experiment with for adding certain looks. My favoured checkpoints right now are AlbedoXL 2.1 for exteriors, NightVision, EpicPhotogasm, and Real Vision XL, among some others (my spelling may be off, I'm away from my PC). It's best to download popular SDXL models aimed at photorealism; ones with portrait examples are fine too. Then compare them with the XYZ plot script at the bottom of A1111 or Forge, which makes a handy grid for you to compare.
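For anyone unfamiliar with what the XYZ plot script produces: it generates one image per checkpoint/setting combination and tiles them into a comparison grid. Here's a toy numpy sketch of that tiling step only (the arrays are stand-ins for generated images, not real SD output; the checkpoint names are just the ones mentioned above):

```python
import numpy as np

# Stand-ins for one generated image per checkpoint (64x64 grayscale).
# In the real script these would come from actual generations.
checkpoints = ["AlbedoXL 2.1", "NightVision", "EpicPhotogasm", "Real Vision XL"]
images = {name: np.full((64, 64), i * 60, dtype=np.uint8)
          for i, name in enumerate(checkpoints)}

# Tile into a 2x2 comparison grid, row-major, like the XYZ plot output.
rows = [np.hstack([images[n] for n in checkpoints[r * 2:(r + 1) * 2]])
        for r in range(2)]
grid = np.vstack(rows)  # final grid is 128x128
```

The point of the grid is that every cell was made with the same prompt and seed, so any visual difference you see is down to the checkpoint alone.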
Thanks! Great one
Glad you liked it!
Thanks for tutorial🎉🎉
I want to know about the sequence rendering!
To get animation with temporal consistency, you'll need to use something like ComfyUI, which is a browser-based node editor.
Just diffusing over individual frames with a plugin like this will look very flickery.
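One way to see where the flicker comes from: each frame is denoised from its own starting noise, so unless the seed is fixed, every frame begins from different latents. This is a toy numpy sketch of that idea only, not real diffusion code, and `frame_latents` is a made-up stand-in for a pipeline's initial noise:

```python
import numpy as np

def frame_latents(seed, shape=(4, 8, 8)):
    """Toy stand-in for the initial noise latents of one animation frame."""
    return np.random.default_rng(seed).standard_normal(shape)

# Fresh seed per frame: starting noise differs, so outputs drift and flicker.
frame_a = frame_latents(seed=1)
frame_b = frame_latents(seed=2)

# Fixed seed per frame: identical starting noise, so far less frame-to-frame drift.
frame_c = frame_latents(seed=42)
frame_d = frame_latents(seed=42)
```

Fixing the seed alone doesn't solve morphing, since the image content still changes between frames, which is why video-trained models and tools like AnimateDiff share information across frames instead.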
DOPE
Finally!!
I hope you found it useful.