Amazing! I subscribed to your newsletter. Thanks for the great content.
Maybe I missed it. What's the link for the workflow?
stick around till the end
INCREDIBLE, I'm working hard on this kind of animation and you saved me a LOOOOT of time, thanks!
Holy cow! This looks amazing! 😮
Thank you 😊
That's incredible, subscribed. 🌟🌟
Where can I download the workflow?
Love what you do and the fact you share it.
Thanks a lot!
You: I was not going to get into ComfyUI!
ComfyUI: JOIN US
@@davidmouser596 that’s exactly what happened! LOL
Hey, thanks, very interesting! Where can I find more info about the Audrey Hepburn ComfyUI OpenPose clip at 0:21, please?
you want to see a tutorial about what everyone’s calling the Netflix shot? 😆
Impressive workflow.
My man Sebastian for the win!
I wonder if you use Human Generator + Metatailor for clothing options
I haven't tried MetaTailor yet, is it like Marvelous Designer?
Should be possible to re-render and clean up entire movies using techniques similar to this in the very near future. Just create super high-resolution facsimiles of everything in the frame and then re-render it. No more individual frame cleanup.
I think the important part of what you said is "near future". At the speed we're seeing these advancements, it's quite likely to be next year.
But yes, I agree it's going to change our process completely. Wondering how long till Blender integrates Stable Diffusion as an actual render engine.
Why would you waste time on super high resolution if you're going to use AI to re-render it anyway? I don't see the point. Do you know how time-consuming 3D CGI can be? x)
I wouldn't call HD and lower "super high resolution". Plus, when I'm exporting from Blender I drop the specs enough that it churns out frames in a fraction of the time it normally would. If it was taking minutes or more per frame I wouldn't bother, but 10-20 seconds I can live with.
Can you please give the link to download the AnimateDiff ControlNet model in your PDF? Both the OpenPose one and this one are the same file.
Thanks for the heads up. Check the link in your email again; I've updated the file.
@@sebastiantorresvfx thank you 🙏
Can I do anime style with it?? ❤❤
Certainly; I’ll show you how in the next video.
@@sebastiantorresvfx when are we getting it dear??
How do I do anime style??
I’m working on that actually. Stay tuned I’ll come out with something soon.
@@sebastiantorresvfx Thank you so much, shall be waiting ❤️❤️❤️❤️
Mine doesn't follow the prompt at all and comes out blurry. Where are the VAE and the ControlNet model? All I found in your downloads was a ControlNet checkpoint, no controlnet_gif.
If you download the AnimateDiff ControlNet specified in the downloads, use that in place of the controlGIF.
In Comfy you don't have to put the LoRA in the prompt; it's all done and controlled in the node itself.
Thank you! I was waiting for someone to let me know lol. Only took a week. Much appreciated 😁
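For anyone wondering what that looks like under the hood, here's a rough sketch of the LoRA node as it appears when you export a ComfyUI workflow in API (JSON) format; the node ids, checkpoint name, and LoRA filename below are just placeholders, not from the video:

```python
# Minimal sketch of a ComfyUI graph fragment in API format.
# The LoRA strength lives on the LoraLoader node itself; nothing goes in the prompt text.
graph = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "dreamshaper_8.safetensors"},  # placeholder checkpoint
    },
    "2": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["1", 0],                          # MODEL output of the checkpoint loader
            "clip": ["1", 1],                           # CLIP output of the checkpoint loader
            "lora_name": "lcm_lora_sd15.safetensors",   # placeholder LoRA filename
            "strength_model": 1.0,
            "strength_clip": 1.0,
        },
    },
}
# A full graph (prompts, sampler, VAE decode, etc.) would be queued against a local
# ComfyUI via its /prompt endpoint; this fragment only shows where the LoRA settings live.
print(graph["2"]["inputs"]["strength_model"])
```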
Hmm, Advanced CLIP Text Encode and Derfuu ComfyUI ModdedNodes refuse to install using the ComfyUI Manager.
If they're giving you trouble installing that way, you can install them the same way you installed the Manager: Google them and clone their GitHub repositories into the custom_nodes folder, then restart ComfyUI.
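If the Manager keeps failing, the manual route really is just cloning the repos into custom_nodes. A rough sketch of that below; the repository URLs are from memory and are assumptions, so double-check them on GitHub before cloning:

```python
# Hedged sketch: clone custom node packs straight into ComfyUI's custom_nodes folder.
# Adjust COMFY_DIR to your install; the repo URLs are assumptions, verify them first.
import subprocess
from pathlib import Path

COMFY_DIR = Path.home() / "ComfyUI"              # adjust to wherever ComfyUI lives
custom_nodes = COMFY_DIR / "custom_nodes"

repos = [
    "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb",    # Advanced CLIP Text Encode (assumed URL)
    "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes",   # Derfuu ModdedNodes (assumed URL)
]

for url in repos:
    subprocess.run(["git", "clone", url], cwd=custom_nodes, check=True)

# Restart ComfyUI afterwards so the new nodes get picked up.
```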
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and other works I generate will be free to use for commercial purposes, or am I violating copyright? I'm searching for more info about this but I get confused. Thanks in advance.
Excellent work! I've just subscribed to your newsletter. Have you tried using the Stable Video Diffusion (SVD) model yet? Do you know if ControlNet can be used with the SVD model in ComfyUI for more control and consistency?
Not yet, unfortunately. I have played around with it, but until we get some ControlNets or something similar for it, it's kind of a shot in the dark with every generation.
@@sebastiantorresvfx yes, that's what I'm finding, I've been experimenting with the settings for a couple of weeks but it is just trial and error at the moment. I'm sure more motion and camera control will arrive soon.
Once it does, I’m just picturing doc brown saying, we’re gonna see some serious $hit! 🤣
I'm wondering when Pika and Runway will use Blender and Unreal Engine to make their videos a lot more believable.
Realistically it will probably go the other way around: Unreal and Blender will start their own video generation.
What kind of PC do you have? And how long did it take to render this video using AI (full process)? Ty so much.
Hey, can't seem to find the workflow after joining the newsletter and clicking downloads. Or is it in the LCM animations PDF companion?
Hi Rob, that's correct, get the PDF; it'll have the link to the workflow and links to any other models I used in this video. 🙂
Amazing! Thanks for sharing.
Hi, can you share a link to that controlGIF ControlNet? Haven't used that one yet.
Thanks! Was that something you renamed? Is it the TILE ControlNet?
Search for crishhh/animatediff_controlnet on Hugging Face.
Hello, did you find the controlGiF.ckpt? I'm not sure I have the right one.
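In case it helps anyone hunting for it, this is roughly how you could pull that repo down with the Hugging Face hub client. The exact filename inside the repo is an assumption, so check the repo's file list first:

```python
# Sketch: download the AnimateDiff controlnet from the crishhh/animatediff_controlnet repo.
# The filename is assumed - browse the repo on Hugging Face to confirm it.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="crishhh/animatediff_controlnet",
    filename="controlnet_checkpoint.ckpt",   # assumed name
)
print(path)  # then copy or symlink this file into ComfyUI/models/controlnet, renamed if needed
```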
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 768]' is invalid for input of size 1310720 ... 4 Models are too much for my 4070 ti
Try lower resolutions and lower frame rates and see how you go.
@sebastiantorresvfx Thank you, I kept going -> lower resolution and fewer frames perform much faster! It worked!
Excellent! Glad to hear you kept it up. Resolution doesn't matter; the upscalers we have and those coming out soon will make it a thing of the past.
Hey!
I get the following error when running the workflow with the ControlNets enabled; no error when they're disabled, but yeah... no ControlNet then:
COMFYUI Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
Any idea?
Thank you for the great tutorial!
Haven't seen that before. Did you run the NVIDIA or the CPU ComfyUI? And also, do you have an NVIDIA GPU?
I'm running the GPU ComfyUI. RTX 4090 / 7950X @@sebastiantorresvfx
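For anyone else hitting this: that message usually just means one set of weights stayed on the CPU while the rest of the graph runs on cuda:0. A minimal plain-PyTorch reproduction of the same failure mode (not the workflow itself):

```python
# Reproduces the "Expected all tensors to be on the same device" RuntimeError.
import torch

layer = torch.nn.Linear(768, 640)            # weights left on the CPU
x = torch.randn(1, 768, device="cuda:0")     # input lives on the GPU

try:
    layer(x)                                 # cpu weights vs cuda:0 input -> mismatch
except RuntimeError as err:
    print(err)

# The fix is to get everything onto one device before the matmul:
out = layer.to("cuda:0")(x)
print(out.shape)
```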
Got yourself a new subscriber, keep up the juicy content, it's awesome!
What is your spec? I'm afraid my 16 GB won't be enough; I'm already struggling going over 15 steps of denoising, but I see you are using 12 with a good result.
Thank you 😊.
That's because I'm using the LCM LoRA and sampler, so I can go as low as 4-6 steps with great results. I go into more detail about using LCM in my previous two videos. Definitely worth playing with if you have lower VRAM. Also try lower resolutions and frame rates (interpolated), followed by an upscale after the fact.
@@sebastiantorresvfx I absolutely will, and fingers crossed my computer won't blow up ahah
@@sebastiantorresvfx upscale results are... meh
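For anyone curious what the LCM tip above translates to outside ComfyUI, here's a hedged diffusers equivalent of the same idea: plain SD 1.5 plus the LCM LoRA, a handful of steps, very low CFG. It's a sketch, not the workflow from the video:

```python
# Sketch of the LCM-LoRA idea in diffusers: 4-6 steps at low guidance instead of 15+.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "cinematic portrait, soft light",   # placeholder prompt
    num_inference_steps=6,              # LCM territory: 4-6 steps
    guidance_scale=1.5,                 # keep CFG very low with LCM
).images[0]
image.save("lcm_test.png")
```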
Workflow link?
You saved my life bro.
love u so much😍😍🥰🥰
Please make that video.
Can I combine this with the ComfyUI WarpFusion workflow?
I can’t say I’ve tried that yet.
Where's the link to the workflow?
Link is in the description
@@sebastiantorresvfx Sort of... the link to how you can sign up for the NEWSLETTER that probably has the link to the workflow is in the description... the link to the workflow isn't in the description. :)
SDXL version??
very impressive stuff. I'd like to subscribe but my anti-virus app says your website is compromised :(
Weird, haven't heard that before. No stress, I've got you; check out the new link in the description.
@@sebastiantorresvfx thanks, the new link worked fine. Who knows, maybe the anti-virus was being over-conservative.
Never received the newsletter or the JSON for this, sad.
It says the email has been sent already, check your spam folder 📁.
great! I would like to see that with IPAdapters :D
That’s on the agenda 😁
Fuck yeah
Maybe nice to play with, but...
Why not do such simple animations just in Blender, or any other DCC app?
This will be useless in a real job; there are customers and art directors who want exactly what they're paying for, not some randomly generated something.
My ComfyUI keeps tapping out, even at 768x432 resolution. I've got about 12 GB of VRAM. The steps are at 8 and the starting step is at 4. Basically it's telling me it's out of memory, unfortunately. Any ideas?
How much RAM does your PC have?
@@sebastiantorresvfx I think it's ~63 GB
Hey, I figured it out; basically I just reduced the frames I was going for. Did a much smaller set of 25 frames at 768x432. I'll be experimenting further, but thanks for your great work @@sebastiantorresvfx
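Rough intuition for why cutting frames was the fix: every frame is another latent pushed through the UNet in the same run, so activation memory grows roughly with the frame count. A back-of-the-envelope sketch, ignoring attention overhead and model weights; the 100-frame figure is just an assumed starting point:

```python
# Back-of-the-envelope: activation memory scales roughly with latent elements,
# i.e. frames x (width/8) x (height/8) for SD-style latents. Overheads ignored.
def relative_cost(frames: int, width: int, height: int) -> int:
    return frames * (width // 8) * (height // 8)

big = relative_cost(100, 768, 432)    # hypothetical longer attempt
small = relative_cost(25, 768, 432)   # the 25-frame run that fit in 12 GB
print(f"25 frames ~ {small / big:.0%} of the activation footprint of 100 frames")
```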
What a great video! How can I find you on Insta or Discord?
AnimateDiff or Deforum for A1111? Thanks.