AI Animation With ComfyUI

  • Published: 20 Dec 2024

Comments

  • @jacekb4057 · 1 year ago · +2

    Amazing! I subscribed to your newsletter. Thanks for the great content.

  • @timpruitt7270 · 1 year ago · +6

    Maybe I missed it. What's the link for the workflow?

    • @MickySarge · 1 year ago · +1

      Stick around till the end.

  • @boulimermoz9111 · 1 year ago · +3

    INCREDIBLE, I'm working hard on this kind of animation and you're saving me a LOT of time, thanks!

  • @LoneBagels · 10 months ago · +1

    Holy cow! This looks amazing! 😮

  • @FudduSawal · 1 year ago · +1

    That's incredible, subscribed! 🌟🌟
    Where can I download the workflow?

  • @manimartinez1232 · 1 year ago · +2

    Love what you do and the fact you share it.
    Thanks a lot!

  • @davidmouser596 · 5 months ago · +1

    You: I was not going to get into ComfyUI!
    ComfyUI: JOIN US

    • @sebastiantorresvfx · 5 months ago

      @@davidmouser596 that’s exactly what happened! LOL

  • @Hysteriamute · 1 year ago · +1

    Hey, thanks, very interesting! Where can I find more info about the Audrey Hepburn ComfyUI OpenPose clip @0:21, please?

    • @sebastiantorresvfx · 1 year ago

      You want to see a tutorial about what everyone's calling the Netflix shot? 😆

  • @USBEN. · 1 year ago · +1

    Impressive workflow.

  • @wholeness · 1 year ago · +1

    My man Sebastian for the win!

  • @themightyflog · 1 year ago · +1

    I wonder if you use Human Generator + Metatailor for clothing options

    • @sebastiantorresvfx · 11 months ago · +1

      I haven't tried MetaTailor yet, is it like Marvelous Designer?

  • @TeddyLeppard · 1 year ago · +2

    Should be possible to re-render and clean up entire movies using techniques similar to this in the very near future. Just create super high-resolution facsimiles of everything in the frame and then re-render it. No more individual frame cleanup.

    • @sebastiantorresvfx · 1 year ago · +1

      I think the important part of what you said is "near future". At the speed we're seeing the advancements, it's quite likely to be next year.
      But yes, I agree it's going to change our process completely. Wondering how long till Blender integrates Stable Diffusion as an actual render engine.

    • @tetsuooshima832 · 1 year ago · +1

      Why would you lose time on super high resolution if you're gonna use AI to re-render it anyway? I don't see the point. Do you know how time-consuming 3D CGI can be? x)

    • @sebastiantorresvfx · 1 year ago

      I wouldn't call HD and lower "super high resolution". Plus, when I'm exporting from Blender I drop the specs enough that it's able to churn out frames in a fraction of the time it normally would. If it was taking minutes or more per frame I wouldn't bother, but 10-20 seconds I can live with.

  • @AIartIsrael · 1 year ago · +2

    Can you please give the link to download the AnimateDiff ControlNet model in your PDF? Both the OpenPose one and this one are the same file.

    • @sebastiantorresvfx · 1 year ago

      Thanks for the heads up, check the link in your email again, I've updated the file.

    • @AIartIsrael · 1 year ago · +1

      @@sebastiantorresvfx thank you 🙏

  • @noobplayer-jc9hy · 1 year ago · +2

    Can I do anime style with it?? ❤❤

    • @sebastiantorresvfx · 11 months ago · +1

      Certainly; I’ll show you how in the next video.

    • @noobplayer-jc9hy · 11 months ago · +1

      @@sebastiantorresvfx When are we getting it, dear??

  • @noobplayer-jc9hy · 1 year ago · +2

    How to do anime style??

    • @sebastiantorresvfx · 11 months ago · +1

      I’m working on that actually. Stay tuned I’ll come out with something soon.

    • @noobplayer-jc9hy · 11 months ago

      @@sebastiantorresvfx Thank you so much, shall be waiting ❤️❤️❤️❤️

  • @the_one_and_carpool · 1 year ago · +1

    Mine does none of the prompt and comes out blurry. Where are the VAE and the ControlNet model? All I found on your thing was a ControlNet checkpoint, no controlnet_gif.

    • @sebastiantorresvfx · 1 year ago

      If you download the AnimateDiff ControlNet specified in the downloads, use that in place of the controlGIF.

  • @hashir · 1 year ago · +1

    In Comfy you don't have to put the LoRA in the prompt; it's all done and controlled in the node itself.

    • @sebastiantorresvfx · 1 year ago

      Thank you! I was waiting for someone to let me know lol. Only took a week. Much appreciated 😁

  • @eddiej.l.christian6754 · 1 year ago · +1

    Hmm, Advanced CLIP Text Encode and Derfuu ComfyUI ModdedNodes refuse to install using the ComfyUI Manager.

    • @sebastiantorresvfx · 1 year ago

      If they're giving you trouble installing that way, you can install them the same way you installed the Manager: Google them and clone their GitHub repositories into the custom_nodes folder, then restart ComfyUI.

  • @SapiensVirtus · 6 months ago

    Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all AI art, music, and works that I generate will be free to use for commercial purposes? Or am I violating copyright terms? I am searching for more info about this but I get confused. Thanks in advance!

  • @stevietee3878 · 1 year ago · +1

    Excellent work! I've just subscribed to your newsletter. Have you tried using the Stable Video Diffusion (SVD) model yet? Do you know if ControlNet can be used with the SVD model in ComfyUI for more control and consistency?

    • @sebastiantorresvfx · 1 year ago · +1

      Not yet unfortunately, I have played around with it but until we get some controlnets or something similar for it, it’s kind of a shot in the dark with every generation.

    • @stevietee3878 · 1 year ago · +1

      @@sebastiantorresvfx yes, that's what I'm finding, I've been experimenting with the settings for a couple of weeks but it is just trial and error at the moment. I'm sure more motion and camera control will arrive soon.

    • @sebastiantorresvfx · 1 year ago · +1

      Once it does, I'm just picturing Doc Brown saying, "we're gonna see some serious $hit!" 🤣

  • @Dave102693 · 1 year ago · +2

    I'm wondering when Pika and Runway will use Blender and Unreal Engine to make their videos a lot more believable?

    • @HistoryIsAbsurd · 11 months ago

      Realistically it will probably go the other way around: Unreal and Blender will start their own video generation.

  • @ssj3mohan · 11 months ago

    What kind of PC do you have? And how long did it take to render this video using AI (full process)? Ty so much!

  • @fusedsf · 9 months ago

    Hey, can't seem to find the workflow after joining the newsletter and clicking downloads. Or is it in the LCM animations PDF companion?

    • @sebastiantorresvfx · 9 months ago

      Hi Rob, that's correct, get the PDF; it'll have the link to the workflow and links to any other models I used in this video. 🙂

  • @aminshallwani9369 · 1 year ago · +1

    Amazing, thanks for sharing!

  • @dkamhaji · 1 year ago · +1

    Hi, can you share a link to that controlGIF ControlNet? Haven't used that one yet, thanks! Was that something you renamed? Is it the TILE ControlNet?

    • @sebastiantorresvfx · 1 year ago

      Search for crishhh/animatediff_controlnet on Hugging Face.

    • @alexandrelouvenaz · 1 year ago

      Hello, did you find the controlGIF.ckpt? I'm not sure I have the right one.

  • @jank54 · 1 year ago · +1

    ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 768]' is invalid for input of size 1310720 ... 4 Models are too much for my 4070 ti

    • @sebastiantorresvfx · 1 year ago · +1

      Try lower resolutions and lower frame rates and see how you go.

    • @jank54 · 1 year ago · +1

      @sebastiantorresvfx Thank you, I kept going -> lower resolution and fewer frames perform much faster! It worked!

    • @sebastiantorresvfx · 1 year ago

      Excellent! Glad to hear you kept it up. Resolution doesn't matter; the upscalers we have, and those coming out soon, will make it a thing of the past.
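The fix in this thread (lower resolution, fewer frames) works because memory use for a latent video batch scales roughly with frames × pixels. A back-of-envelope sketch, assuming SD1.5-style latents (4 channels, 8× spatial downscale) and fp16 storage; the function and numbers are illustrative, not from the video:

```python
# Rough VRAM estimate for one latent video tensor (assumed SD1.5-style:
# 4 latent channels, 8x spatial downscale, fp16 = 2 bytes per element).
# Activations during sampling are much larger, but scale the same way.

def latent_megabytes(width, height, frames, channels=4, bytes_per_elem=2):
    """Approximate size of a (frames, channels, H/8, W/8) latent tensor in MB."""
    elems = frames * channels * (height // 8) * (width // 8)
    return elems * bytes_per_elem / (1024 ** 2)

full = latent_megabytes(1024, 576, 100)   # long clip at higher resolution
small = latent_megabytes(768, 432, 25)    # the settings that fit above
print(f"{full:.2f} MB vs {small:.2f} MB")
```

Halving resolution cuts memory by ~4×, and frame count scales it linearly, which is why dropping both rescues an out-of-memory run.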

  • @lei.1.6 · 1 year ago · +1

    Hey!
    I get the following error when running the workflow with the ControlNets enabled; no error when they are disabled, but yeah... no ControlNet then:
    COMFYUI Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
    Any idea?
    Thank you for the great tutorial!

    • @sebastiantorresvfx · 1 year ago

      Haven't seen that before. Did you run the NVIDIA or the CPU ComfyUI? And do you have an NVIDIA GPU?

    • @lei.1.6 · 1 year ago

      @@sebastiantorresvfx I'm running the GPU ComfyUI. RTX 4090 / 7950X.
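The "Expected all tensors to be on the same device" message in the thread above is a generic PyTorch error: some weights or inputs live on the CPU while others are on cuda:0, often because one model in the graph was never moved to the GPU. A minimal stand-alone sketch of the cause and fix in plain PyTorch (not ComfyUI code; the model and tensor names are illustrative):

```python
# Sketch of the "tensors on different devices" failure mode and its fix.
import torch

# Pick whichever device is available so this also runs on CPU-only machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)  # freshly created weights live on the CPU
x = torch.randn(3, 4)

# Mixing CPU weights with CUDA inputs (or vice versa) raises the error from
# the comment above. The fix is to move everything to one device first:
model = model.to(device)
x = x.to(device)
y = model(x)

# Every tensor involved in the matmul is now on the same device.
assert x.device == next(model.parameters()).device
print(tuple(y.shape))
```

In ComfyUI terms, this usually means one of the loaded models (e.g. a ControlNet) stayed on the CPU while the rest of the pipeline ran on the GPU.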

  • @Ekopop · 1 year ago · +1

    Got yourself a new subscriber, keep up the juicy content, it's awesome!
    What are your specs? I'm afraid my 16 GB won't be enough; I'm already struggling going over 15 steps of denoising, but I see you are using 12 with a good result.

    • @sebastiantorresvfx · 1 year ago

      Thank you 😊.
      That's because I'm using the LCM LoRA and sampler, so I can go as low as 4-6 steps with great results. I go into more detail about using LCM in my previous two videos. Definitely worth playing with if you have lower VRAM. Also try lower resolutions and frame rates (interpolated), followed by upscaling after the fact.

    • @Ekopop · 1 year ago

      @@sebastiantorresvfx I absolutely will, and fingers crossed my computer won't blow up ahah

    • @Ekopop · 1 year ago

      @@sebastiantorresvfx upscale results are... meh

  • @mysticacreations3188 · 9 months ago

    Workflow link?

  • @imtaha964 · 1 year ago · +1

    You saved my life bro,
    love u so much 😍😍🥰🥰

    • @imtaha964 · 1 year ago

      Please make that video.

  • @the_one_and_carpool · 1 year ago

    Can I combine this with the ComfyUI WarpFusion workflow?

  • @CHARIOTangler · 1 year ago · +1

    Where's the link to the workflow?

    • @sebastiantorresvfx · 1 year ago

      Link is in the description.

    • @JonRowlison · 1 year ago

      @@sebastiantorresvfx Sort of... the link to sign up for the NEWSLETTER that probably has the link to the workflow is in the description; the link to the workflow itself isn't. :)

  • @sandeepm809 · 5 months ago

    SDXL version??

  • @victorhlucas · 1 year ago · +1

    Very impressive stuff. I'd like to subscribe, but my anti-virus app says your website is compromised :(

    • @sebastiantorresvfx · 1 year ago

      Weird, haven't heard that before. No stress, I got you, check out the new link in the description.

    • @victorhlucas · 1 year ago · +1

      @@sebastiantorresvfx Thanks, the new link worked fine. Who knows, maybe the anti-virus was being over-conservative.

  • @MrDebranjandutta · 1 year ago

    Never received the newsletter and the JSON for this, sad.

    • @sebastiantorresvfx · 1 year ago

      It says the email has been sent already, check your spam folder 📁.

  • @alishkaBey · 1 year ago · +1

    Great! I would like to see that with IPAdapters :D

  • @ALFTHADRADDAD · 1 year ago · +1

    Fuck yeah

  • @guruware8612 · 1 year ago

    Maybe nice to play with, but... why not do such simple animations just in Blender, or any other DCC app?
    This will be useless in a real job; there are customers and art directors who want exactly what they will pay for, not some randomly generated something.

  • @ALFTHADRADDAD · 11 months ago

    My ComfyUI keeps tapping out, even at 768x432 resolution. I've about 12 GB of VRAM. The steps are at 8 and the starting step is at 4. Basically it's telling me it's out of memory, unfortunately. Any ideas?

    • @sebastiantorresvfx · 11 months ago

      How much RAM does your PC have?

    • @ALFTHADRADDAD · 11 months ago

      @@sebastiantorresvfx I think it's ~63GB

    • @ALFTHADRADDAD · 11 months ago · +1

      Hey, I figured it out; basically I just reduced the frames I was going for. Did a much smaller set of 25 frames at 768x432. I'll be experimenting further, but thanks for your great work @@sebastiantorresvfx

  • @imCybearPunk · 1 year ago

    What a great video! How can I find you on Insta or Discord?

  • @HO-cj3ut · 1 year ago

    AnimateDiff or Deforum? For A1111, thanks.