RAVE AnimateDiff Animation - Text Prompt Consistency Styling Workflow Available in OpenArt : openart.ai/workflows/AaH6b9J8oDPHmYenNJtS
Are you running SD locally?
Which workflow performs better on slower PCs? I have an RTX 4000 with 8 GB VRAM and 64 GB RAM.
This is actually close to what I need to use it for. I want to create a longer music video with scenes taken from different clips of an artist. But these clips come from many different sources and different music videos of the artist, so there's no consistency in clothing, look, style, etc. RAVE looks like the answer for creating a base video from the different clips to then style-animate over afterwards.
Really nice workflow, and you have arranged all the nodes into a really usable layout!
Glad it helps 😉
I am still missing how RAVE contributes to the vid2vid AnimateDiff workflow. @Benji could have achieved the same ballerina result with NO RAVE, ONLY AnimateDiff nodes, like in his dancing shorts videos for example. Can anyone explain the use case or benefit of RAVE? Thanks!
RAVE is for outfit consistency in this case, and AnimateDiff makes the whole movement smooth. Actually, the AnimateDiff-only workflow where I used IPA could achieve this consistency too, but not always, and it requires a good image for the IPA to understand what you want.
P.S.: and yes, you are right, I can do that with the no-RAVE, AnimateDiff-only workflow. Hehe XD
@@TheFutureThinker I think I get it. So use RAVE when the clothing in the input video is totally different from the output video, like in this tutorial where the dress is converted to a bikini. But if, say, I just want to replace the dancing woman's head (face + hair), I should use IP-Adapter and a low denoise, because RAVE is SD1.5 (less realism than SDXL) and is unnecessary in that case.
It depends on the source video and the style you want to change to. Mostly I use SD 1.5 Realistic Vision, but you can try SDXL with something like RealVis or Reality XL. Because the width and height will be larger in SDXL, it requires more memory from the hardware as well, and more processing time on each sampling step.
But for face and hair, yes, using IPA is good enough. Sometimes less is more, honestly.
It is a great workflow and a patient presentation. Thank you! Bravo! I am still a beginner, and I am just considering video-to-video that changes the outfit and makes the output video smooth, without the more complex requirements of background changing, style changing, or face swap. I keep all of those intact and just change the character's outfit. In this case, which nodes can I remove from this workflow? By the way, did you try a video with multiple characters, like 3 or 4? Are RAVE, IP-Adapter, and AnimateDiff robust enough to handle those kinds of videos? Thank you.
You can check out this one ruclips.net/video/alnrfN6F-lI/видео.htmlsi=hyl1W49BqlfsgmOF
It is better for your scenario.
@@TheFutureThinker thank you for your prompt response, have a lovely day. I will join the Patreon pro subscription to support you.
😊 Thank you, enjoy and have fun while learning new technology.
Honestly, if you are just beginning with SD, you can check out videos to learn the foundations first. That way, it will be better.
Good to see someone from my alma mater (the University of Illinois) on this paper!
He is a great guy; he has done other visual tracking research as well.
We're still about a year or two out from video that is truly top-notch quality, looks totally real, and can fool the naked eye. But it's good to see some progress in this realm; it's all unreal.
That's true. Things are moving so fast, and every month new tech is released.
Is this workflow outdated compared to your new IPAdapter2 workflow video? This one seems more robust... but older, so I wonder 😅
They have different uses. RAVE + AnimateDiff requires high-end GPU hardware to run, and I have updated this workflow to IPAv2 as well.
Cool. Do a comparison with the same prompt, "flirting smile" - it might give her some facial animation.
Will try it next video 👍
I have loaded up your combined workflow from Patreon but am still getting missing nodes; the GET and SET nodes all come up red. Nothing shows up when I try to install missing custom nodes. Where do I get these?
Install this custom node: github.com/kijai/ComfyUI-KJNodes
Nice work. I assume RAVE with AnimateDiff is going to consume more GPU power, right? Because it has multiple sampling passes to run.
Yes, it takes more. Even on a 4090, I cap it at 250 frames.
Thanks, I am using an RTX 3060 with 12GB VRAM, and I got mine working by reducing the frames to 24 and removing the last 4x upscale nodes. But how do I get more than 1 second of video animation?
Have you tried 60 frames? That will be about 1 second there.
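The frames-to-seconds relationship above is just frame count divided by playback rate. A minimal sketch (the `clip_seconds` helper and the frame rates are illustrative, not from the workflow itself; AnimateDiff outputs are often saved at lower rates like 8-12 fps before interpolation, so check your Video Combine settings):

```python
# Hypothetical helper: how long a clip plays for a given frame count
# and playback frame rate.
def clip_seconds(num_frames: int, fps: float) -> float:
    return num_frames / fps

# 60 frames played back at 60 fps is exactly 1 second;
# 24 frames at 12 fps is 2 seconds of (slower) animation.
one_sec = clip_seconds(60, 60)
two_sec = clip_seconds(24, 12)
print(one_sec, two_sec)
```

So to go past 1 second you either raise the frame count (more VRAM) or save at a lower fps and interpolate.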
Thanks
The video is wonderful!! could I get a workflow using rave and animatediff?
Looks like I saw you in there. Welcome.
Can you provide a link to the Discord, since the one below the video doesn't work?
I don't think so.
Increasing the grid size seems to make the output of RAVE more stable and less flickery, but it also seems to decrease the image quality.
You can add a 2nd sampler to do a refinement pass on the RAVE output.
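The grid-size trade-off discussed above follows from how RAVE works: it packs frames into one grid image so the sampler denoises them jointly, which enforces cross-frame consistency, but each frame then occupies only a fraction of the grid's resolution. A rough NumPy sketch of that packing idea (the function names and layout are my assumption, not RAVE's actual implementation):

```python
import numpy as np

def frames_to_grid(frames: np.ndarray, g: int) -> np.ndarray:
    """Tile g*g frames (n, h, w, c) into one (g*h, g*w, c) grid image."""
    n, h, w, c = frames.shape
    assert n == g * g, "need exactly g*g frames per grid"
    rows = [np.concatenate(list(frames[r * g:(r + 1) * g]), axis=1)
            for r in range(g)]
    return np.concatenate(rows, axis=0)

def grid_to_frames(grid: np.ndarray, g: int) -> np.ndarray:
    """Inverse: slice a (g*h, g*w, c) grid back into g*g frames."""
    gh, gw, c = grid.shape
    h, w = gh // g, gw // g
    return np.stack([grid[r * h:(r + 1) * h, col * w:(col + 1) * w]
                     for r in range(g) for col in range(g)])

frames = np.random.rand(9, 64, 64, 3)      # nine 64x64 frames
grid = frames_to_grid(frames, 3)           # one 192x192 "image"
restored = grid_to_frames(grid, 3)
assert np.allclose(restored, frames)
```

A bigger `g` means more frames share one denoising pass (more stability), but at a fixed sampling resolution each frame gets 1/g of the width and height, which is why quality drops; a refinement sampler afterwards helps recover detail.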
Will you be posting this as a separate branch workflow on openart please?
Yes, sure, the RAVE workflow will be in a new post after testing. I think it's good to go today. :)
@thomasmiller7678 it's here: openart.ai/workflows/AaH6b9J8oDPHmYenNJtS
@@TheFutureThinker I've tried a little with RAVE + AnimateDiff, but found that after AnimateDiff it becomes oversaturated; maybe I need to lower the CFG, though. RAVE is pretty good on its own, to be honest; with my current source video there is very little flicker with just RAVE.
Yes, from my experience with RAVE only, it will flicker; that can't be avoided. But the beauty of it is that you can set denoising to 1 and let the AI do the styling. So we need AnimateDiff to bring the motion back to smooth.
It would be great to have all the links used in the video. PS: the Discord link is expired.
I put all the model links and custom node links in the article post linked in the description. Nowadays, YouTube doesn't like videos that paste all the links in the description. Some other people still do it the old way; they will find out one day.
Where do I find this model: realisticVisionV60B1_v60B1VAE.safetensors? Thanks
This one: civitai.com/models/4201/realistic-vision-v60-b1
Great thanks! @@TheFutureThinker
Hey cool update rave plus animatediff looks like it might bring the consistency I've been struggling to get while being able to change the character etc
Yes, very consistent, and it is going to get better.
He's applied so many things on top of RAVE, like AnimateDiff, it's hard to see what RAVE is really doing. Seems like he could just ditch RAVE and stick with AnimateDiff.
Yes, AnimateDiff alone can do consistent styling after all, and RAVE adds extra memory load. But I previously achieved that in my other AnimateDiff workflow, so applying RAVE here is an experiment.
Do you have a webui version?
I am not sure whether the A1111 webui has a RAVE extension or not.
Hello! Why can't SetNode and GetNode be loaded?
Download the missing custom node.
@@TheFutureThinker It does not come up as a missing custom node...
@@TheFutureThinker They are not in the missing nodes
@@Epicfuzz I think these are indicators rather than functional nodes.
🖖🖖🖖
🖖😎🖖
@@TheFutureThinker this stuff with RAVE is mind blowing! Thanks for the walkthrough
Glad it helps. A new update will be released within 2 days. Stay tuned.
It runs out of memory (RAM) for me.
Try lowering the number of frames.
@@TheFutureThinker How much VRAM is needed to run this workflow?
@@megaerror007 The more VRAM above 16GB, the better.
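The advice above (lower the frame count when memory runs out) makes sense because the per-frame footprint scales linearly. A back-of-envelope sketch under stated assumptions only: SD1.5 latents are 4 channels at 1/8 the pixel resolution in fp16 (2 bytes per value); real usage is far higher once model weights, attention buffers, and VAE decoding are counted, but the linear scaling in frame count is the point:

```python
# Rough estimate of latent tensor size alone (NOT total VRAM usage):
# SD1.5 latent is 4 channels, 1/8 spatial scale, fp16 (2 bytes/value).
def latent_mb(num_frames: int, width: int, height: int) -> float:
    channels, scale, bytes_per_value = 4, 8, 2
    per_frame = channels * (width // scale) * (height // scale) * bytes_per_value
    return num_frames * per_frame / 1024**2

# 250 frames at 512x768 vs. 24 frames: a ~10x difference in latent storage,
# and the attention/upscale stages scale with it too.
print(latent_mb(250, 512, 768))
print(latent_mb(24, 512, 768))
```

The latents themselves are small; the multi-gigabyte costs come from cross-frame attention and upscaling, which is why dropping from 250 to 24 frames and removing the 4x upscale nodes rescued the 12GB card above.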