AI Animation With ComfyUI

  • Published: 7 Jun 2024
  • The method I use to get consistent animated characters with ComfyUI and AnimateDiff. BYO video and it's good to go!
    Want to advance your AI animation skills? Check out my Patreon: / sebastiantorresvfx
    For the companion PDF with all the links and the ComfyUI workflow:
    www.sebastiantorresvfx.com/dow...
    You're awesome! Thanks for hanging out with me!

Comments • 83

  • @boulimermoz9111 5 months ago +3

    INCREDIBLE, I'm working hard on this kind of animation and you saved me a LOT of time, thanks

  • @manimartinez1232 5 months ago +2

    Love what you do and the fact you share it.
    Thanks a lot!

  • @jacekb4057 5 months ago +2

    Amazing! I subscribed to your newsletter. Thanks for the great content

  • @USBEN. 5 months ago +1

    Impressive workflow.

  • @LoneBagels 3 months ago +1

    Holy cow! This looks amazing! 😮

  • @stevietee3878 5 months ago +1

    Excellent work! I've just subscribed to your newsletter. Have you tried the Stable Video Diffusion (SVD) model yet? Do you know if ControlNet can be used with the SVD model in ComfyUI for more control and consistency?

    • @sebastiantorresvfx 5 months ago +1

      Not yet unfortunately, I have played around with it but until we get some controlnets or something similar for it, it’s kind of a shot in the dark with every generation.

    • @stevietee3878 5 months ago +1

      @sebastiantorresvfx yes, that's what I'm finding. I've been experimenting with the settings for a couple of weeks, but it's just trial and error at the moment. I'm sure more motion and camera control will arrive soon.

    • @sebastiantorresvfx 5 months ago +1

      Once it does, I'm just picturing Doc Brown saying, we're gonna see some serious $hit! 🤣

  • @aminshallwani9369 5 months ago +1

    Amazing, thanks for sharing

  • @Ekopop 5 months ago +1

    You got yourself a new subscriber, keep up the juicy content, it's awesome!
    What are your specs? I'm afraid my 16 GB won't be enough; I'm already struggling going over 15 steps of denoising, but I see you're using 12 with a good result.

    • @sebastiantorresvfx 5 months ago

      Thank you 😊.
      That's because I'm using the LCM LoRA and sampler, so I can go as low as 4-6 steps with great results. I go into more detail about using LCM in my previous two videos. Definitely worth playing with if you have lower VRAM. Also try lower resolutions and frame rates (interpolated), followed by an upscale after the fact.

    • @Ekopop 5 months ago

      @sebastiantorresvfx I absolutely will, and fingers crossed my computer won't blow up ahah

    • @Ekopop 5 months ago

      @sebastiantorresvfx upscale results are... meh
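The render-low-then-interpolate idea suggested in that reply can be illustrated with a toy Python sketch. Real interpolators used with ComfyUI (e.g. RIFE-style frame interpolation nodes) estimate motion; this naive linear blend only shows the concept of rendering fewer frames and synthesizing the in-betweens:

```python
def interpolate_frames(frames):
    """Double the frame rate by inserting a linear blend between each
    pair of adjacent frames (naive; real tools estimate motion)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    out.append(frames[-1])
    return out

# Two 3-"pixel" frames become three: the middle one is the average.
clip = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
print(interpolate_frames(clip))  # → [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]]
```

Rendering at half the frame rate roughly halves the diffusion work; the interpolation pass afterwards is comparatively cheap.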

  • @wholeness 5 months ago +1

    My man Sebastian for the win!

  • @imtaha964 5 months ago +1

    you saved my life bro
    love u so much😍😍🥰🥰

    • @imtaha964 5 months ago

      Please make that video.

  • @TeddyLeppard 5 months ago +2

    Should be possible to re-render and clean up entire movies using techniques similar to this in the very near future. Just create super high-resolution facsimiles of everything in the frame and then re-render it. No more individual frame cleanup.

    • @sebastiantorresvfx 5 months ago +1

      I think the important part of what you said is near future. At the speed we're seeing the advancements, it's quite likely to be next year.
      But yes, I agree it's going to change our process completely. Wondering how long till Blender integrates Stable Diffusion as an actual render engine.

    • @tetsuooshima832 5 months ago +1

      Why would you lose time on super high resolution if you're gonna use AI to re-render it anyway? I don't see the point. Do you know how time-consuming 3D CGI can be? x)

    • @sebastiantorresvfx 5 months ago

      I wouldn't call HD and lower 'super high resolution'. Plus, when I'm exporting from Blender I drop the specs enough that it's able to churn out frames in a fraction of the time it normally would. If it were taking minutes or more per frame I wouldn't bother, but 10-20 seconds I can live with.

  • @Hysteriamute 5 months ago +1

    Hey, thanks, very interesting! Where can I find more info about the Audrey Hepburn ComfyUI OpenPose clip @0:21 please?

    • @sebastiantorresvfx 5 months ago

      You want to see a tutorial about what everyone's calling the Netflix shot? 😆

  • @ssj3mohan 5 months ago

    What kind of PC do you have? And how long did it take to render this video using AI [full process]? Ty so much

  • @FudduSawal 5 months ago +1

    That's incredible, subscribed. 🌟🌟
    Where can I download the workflow?

  • @timpruitt7270 5 months ago +6

    Maybe I missed it. What's the link for the workflow?

    • @MickySarge 5 months ago +1

      Stick around till the end.

  • @hashir 5 months ago +1

    In Comfy you don't have to put the LoRA in the prompt; it's all done and controlled in the node itself.

    • @sebastiantorresvfx 5 months ago

      Thank you! I was waiting for someone to let me know lol. Only took a week. Much appreciated 😁

  • @mysticacreations3188 3 months ago

    Workflow link?

  • @fusedsf 2 months ago

    Hey, I can't seem to find the workflow after joining the newsletter and clicking downloads. Or is it in the LCM animations PDF companion?

    • @sebastiantorresvfx 2 months ago

      Hi Rob, that's correct, get the PDF; it'll have the link to the workflow and links to any other models I used in this video. 🙂

  • @themightyflog 5 months ago +1

    I wonder if you use Human Generator + Metatailor for clothing options.

    • @sebastiantorresvfx 5 months ago +1

      I haven't tried Metatailor yet. Is it like Marvelous Designer?

  • @dkamhaji 5 months ago +1

    Hi, can you share a link to that controlGIF ControlNet? Haven't used that one yet.
    Thanks! Was that something you renamed? Is it the TILE ControlNet?

    • @sebastiantorresvfx 5 months ago

      Search for crishhh/animatediff_controlnet on Hugging Face.

    • @alexandrelouvenaz 5 months ago

      Hello, did you find the controlGIF.ckpt? I'm not sure I have the right one.

  • @lei.1.6 5 months ago +1

    Hey!
    I get the following error when running the workflow with the ControlNets enabled; no error when they are disabled, but yeah... no ControlNet then:
    COMFYUI Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
    Any idea?
    Thank you for the great tutorial!

    • @sebastiantorresvfx 5 months ago

      Haven't seen that before. Did you run the NVIDIA or the CPU ComfyUI? And do you have an NVIDIA GPU?

    • @lei.1.6 5 months ago

      @sebastiantorresvfx I'm running the GPU ComfyUI. RTX 4090 / 7950X

  • @Dave102693 5 months ago +2

    I'm wondering when Pika and Runway will use Blender and Unreal Engine to make their videos a lot more believable?

    • @HistoryIsAbsurd 5 months ago

      Realistically it will probably go the other way around: Unreal and Blender will start their own video generation.

  • @ALFTHADRADDAD 5 months ago +1

    Fuck yeah

  • @AIartIsrael 5 months ago +2

    Can you please give the link to download the AnimateDiff ControlNet model in your PDF? Both the OpenPose one and this one are the same file.

    • @sebastiantorresvfx 5 months ago

      Thanks for the heads up; check the link in your email again, I've updated the file.

    • @AIartIsrael 5 months ago +1

      @sebastiantorresvfx thank you 🙏

  • @alishkaBey 5 months ago +1

    Great! I would like to see that with IPAdapters :D

  • @the_one_and_carpool 5 months ago

    Can I combine this with the ComfyUI WarpFusion workflow?

  • @eddiej.l.christian6754 5 months ago +1

    Hmm, Advanced CLIP Text Encode and Derfuu ComfyUI ModdedNodes refuse to install using the ComfyUI Manager.

    • @sebastiantorresvfx 5 months ago

      If they're giving you trouble installing that way, you can install them the same way you installed the Manager: Google them and clone their GitHub repositories into the custom_nodes folder, then restart ComfyUI.
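The manual install route described in that reply can be sketched as a small script that builds the clone commands. The repository URLs below are assumptions based on the node pack names; verify the actual repositories on GitHub before running anything, and adjust the custom_nodes path to your install:

```python
from pathlib import Path

# Assumed repository URLs for the two node packs mentioned above --
# double-check the authors' GitHub pages before cloning.
repos = [
    "https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb.git",
    "https://github.com/Derfuu/Derfuu_ComfyUI_ModdedNodes.git",
]
custom_nodes = Path("ComfyUI") / "custom_nodes"  # adjust to your install

# Print the commands to run in a terminal; restart ComfyUI afterwards
# so it picks up the new node packs.
for url in repos:
    name = url.split("/")[-1].removesuffix(".git")
    print(f"git clone {url} {custom_nodes / name}")
```

The same pattern works for any custom node pack that ComfyUI-Manager fails to install.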

  • @the_one_and_carpool 5 months ago +1

    Mine follows none of the prompt and comes out blurry. Where are the VAE and the ControlNet model? All I found on your page was a ControlNet checkpoint, no controlGIF.

    • @sebastiantorresvfx 5 months ago

      If you download the AnimateDiff ControlNet specified in the downloads, use that in place of the controlGIF.

  • @MrDebranjandutta 5 months ago

    Never received the newsletter and the JSON for this, sad.

    • @sebastiantorresvfx 5 months ago

      It says the email has been sent already; check your spam folder 📁.

  • @victorhlucas 5 months ago +1

    Very impressive stuff. I'd like to subscribe but my anti-virus app says your website is compromised :(

    • @sebastiantorresvfx 5 months ago

      Weird, haven't heard that before. No stress, I got you. Check out the new link in the description.

    • @victorhlucas 5 months ago +1

      @sebastiantorresvfx thanks, the new link worked fine. Who knows, maybe the anti-virus was being over-conservative.

  • @jank54 5 months ago +1

    ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 768]' is invalid for input of size 1310720 ... 4 models are too much for my 4070 Ti

    • @sebastiantorresvfx 5 months ago +1

      Try lower resolutions and lower frame rates and see how you go.

    • @jank54 5 months ago +1

      @sebastiantorresvfx Thank you, I kept going -> the lower resolution and fewer frames perform much faster! It worked!

    • @sebastiantorresvfx 5 months ago

      Excellent! Glad to hear you kept it up. Resolution doesn't matter; the upscalers we have now, and those coming out soon, will make it a thing of the past.

  • @noobplayer-jc9hy 5 months ago +2

    How do I do anime style??

    • @sebastiantorresvfx 5 months ago +1

      I'm working on that actually. Stay tuned, I'll come out with something soon.

    • @noobplayer-jc9hy 5 months ago

      @sebastiantorresvfx Thank you so much, shall be waiting ❤️❤️❤️❤️

  • @CHARIOTangler 5 months ago +1

    Where's the link to the workflow?

    • @sebastiantorresvfx 5 months ago

      Link is in the description.

    • @JonRowlison 5 months ago

      @sebastiantorresvfx Sort of... the link to sign up for the NEWSLETTER that probably has the link to the workflow is in the description; the link to the workflow itself isn't. :)

  • @noobplayer-jc9hy 5 months ago +2

    Can I do anime style with it??❤❤

    • @sebastiantorresvfx 5 months ago +1

      Certainly; I’ll show you how in the next video.

    • @noobplayer-jc9hy 5 months ago +1

      @sebastiantorresvfx when are we getting it, dear??

  • @guruware8612 5 months ago

    Maybe nice to play with, but...
    why not do such simple animations just in Blender, or any other DCC app?
    This will be useless in a real job; there are customers and art directors who want exactly what they pay for, not some randomly generated something.

  • @imCybearPunk 5 months ago

    What a great video! How can I find you on Insta or Discord?

  • @ALFTHADRADDAD 5 months ago

    My ComfyUI keeps tapping out, even at 768x432 resolution. I have about 12 GB of VRAM. The steps are at 8 and the starting step is at 4. Basically it's telling me it's out of memory, unfortunately. Any ideas?

    • @sebastiantorresvfx 5 months ago

      How much RAM does your PC have?

    • @ALFTHADRADDAD 5 months ago

      @sebastiantorresvfx I think it's ~63GB

    • @ALFTHADRADDAD 5 months ago +1

      Hey, I figured it out: basically I just reduced the frames I was going for. Did a much smaller set of 25 frames at 768x432. I'll be experimenting further, but thanks for your great work @sebastiantorresvfx
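A rough rule of thumb behind that fix (fewer frames at the same resolution): SD1.x-style models work on latents at 1/8 the pixel resolution per side, and AnimateDiff's motion module attends across every frame in the batch, so memory pressure grows roughly with latent area times frame count. The helper below is a back-of-envelope heuristic for comparing settings, not an exact VRAM formula, and the 48-frame baseline is an assumed original clip length:

```python
def latent_cost(width, height, frames):
    """Proxy for AnimateDiff memory pressure: latent pixels (1/8 of
    the image resolution per side) times frames in the batch.
    A heuristic for comparing settings, not an exact VRAM figure."""
    return (width // 8) * (height // 8) * frames

baseline = latent_cost(768, 432, 48)  # assumed original clip length
reduced = latent_cost(768, 432, 25)   # the 25-frame fix from this thread
print(f"{baseline / reduced:.2f}x less latent data")  # → 1.92x less latent data
```

Halving the resolution instead would cut the cost by 4x, which is why lowering resolution tends to help even more than trimming frames.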

  • @HO-cj3ut 5 months ago

    AnimateDiff or Deforum for A1111? Thanks