IC Light Changer For Videos With AnimateDiff and ComfyUI

  • Published: Sep 19, 2024

Comments • 99

  • @piorewrzece
    @piorewrzece 2 months ago +5

    Superb and clean tutorial. Also the attention to sharing all the links and files is THE BEST.

  • @saymew1878
    @saymew1878 2 months ago +2

    Man, the quality is incredible!

  • @BuckwheatV
    @BuckwheatV 1 month ago

    omg it works! such a complex process, but very well organized and it actually works! thank you!

  • @张辰-r5o
    @张辰-r5o 2 months ago

    This is awesome and very detailed, it saved me a lot of trouble, thumbs up and thanks for your hard work.

  • @INTELIGENCIAARTIFICIAL-eb7zq
    @INTELIGENCIAARTIFICIAL-eb7zq 2 months ago

    AMAZING!!!! CONGRATULATIONS BRO

    • @jerrydavos
      @jerrydavos  2 months ago

      Thank you so much 😀

  • @matsnilsson7922
    @matsnilsson7922 2 months ago

    Brilliant! Thank you!

  • @jittthooce
    @jittthooce 2 months ago

    keep 'em coming

  • @MajomHus
    @MajomHus 2 months ago

    Great tutorial!

  • @leolis78
    @leolis78 2 months ago

    Great video!

  • @t8levin
    @t8levin 2 months ago +2

    Is it possible to change the lighting without changing the main subject in the video? It creates too many deformities and it's not really usable for professional work.

    • @jerrydavos
      @jerrydavos  2 months ago +1

      Yes, unfortunately it re-renders the video from scratch with AnimateDiff, which introduces AI artifacts like morphing, bugged faces, deformities, etc.
      This workflow might not be a good fit for professional projects yet.

    • @t8levin
      @t8levin 2 months ago

      @@jerrydavos bummer... Would be an absolute game changer for movies and music videos

  • @ZainSarwar5
    @ZainSarwar5 2 months ago

    Error
    Motion module 'motionModel_v01.ckpt' is intended for SD1.5 models, but the provided model is type SDXL.

    • @jerrydavos
      @jerrydavos  2 months ago

      Use only SD 1.5-compatible models in this workflow; SDXL models won't work.
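
      A quick way to verify which family a checkpoint belongs to is to inspect its cross-attention width: SD 1.5 uses a 768-dimensional text context, SDXL uses 2048. A hedged sketch (the file path is a placeholder, and the key heuristic is an assumption that holds for standard checkpoints):

      # Hypothetical helper: guess SD 1.5 vs SDXL from a checkpoint's
      # cross-attention context width (768 vs 2048).
      from safetensors import safe_open

      def guess_model_family(path: str) -> str:
          with safe_open(path, framework="pt") as f:
              for key in f.keys():
                  if key.endswith("attn2.to_k.weight"):  # any cross-attn K projection
                      dim = f.get_slice(key).get_shape()[1]
                      if dim == 768:
                          return "SD 1.5"
                      if dim == 2048:
                          return "SDXL"
                      return f"other (context dim {dim})"
          return "no cross-attention keys found"

      print(guess_model_family("models/checkpoints/my_model.safetensors"))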

  • @davimak4671
    @davimak4671 2 months ago

    bro, can you make a LivePortrait + vid2vid workflow? It would be an awesome tutorial

    • @jerrydavos
      @jerrydavos  2 months ago

      Yes, I'm testing it; I'll post when I get some good results.

  • @rosederrick9863
    @rosederrick9863 1 month ago

    "Rebatch" doesn't work when loading long videos. "Load video VHS" still loads all frames into RAM and then it run out of memory. I have tried "Meta Batch Manager" with "Load video VHS" and "Video Combine VHS" which only generated discontinuous scenes. By the way, I have 32G RAM which can only load 20~24 frames to process. I'm still figuring out how to generate long videos.

    • @jerrydavos
      @jerrydavos  1 month ago +1

      Hey, you have to follow the video from 7:43 to extract frames.
      If you are still facing RAM issues while extracting the passes, you can use the passes exporter workflow from here:
      drive.google.com/drive/folders/1hLU5MhikUe6SnEnEPQc3tKTaNGmFT6p2
      How it works is explained here: www.patreon.com/posts/v4-0-controlnet-98846295
      Extract the passes you need for the IC-Light batch workflow (depth, mask, and frames), then follow the video as normal from 11:00.
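
      If RAM is still the bottleneck, the frames can also be extracted to disk up front and fed to the batch workflow as an image sequence. A minimal sketch using OpenCV (file names are placeholders; the passes exporter above remains the intended route):

      # Hedged sketch: stream frames from a long video to disk one at a time,
      # so only the current frame ever sits in RAM.
      import os
      import cv2

      def extract_frames(video_path: str, out_dir: str) -> int:
          os.makedirs(out_dir, exist_ok=True)
          cap = cv2.VideoCapture(video_path)
          i = 0
          while True:
              ok, frame = cap.read()  # decode a single frame
              if not ok:
                  break
              cv2.imwrite(os.path.join(out_dir, f"frame_{i:05d}.png"), frame)
              i += 1
          cap.release()
          return i

      print(extract_frames("source.mp4", "frames/"))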

  • @johnriperti3127
    @johnriperti3127 2 months ago

    This is insane!

  • @ParvathyKapoor
    @ParvathyKapoor 2 months ago

    Thanks a lot

  • @anitoon22
    @anitoon22 22 days ago

    bro, time travel prompt walking. How do I make it?

  • @sam-ss9rn
    @sam-ss9rn 2 months ago

    Thank you. I am trying this one. I have a question: whenever I run each batch (50), a little difference occurs. Any way to avoid this difference?

  • @holly1997-AI
    @holly1997-AI 2 months ago

    so coool!! Thanks

  • @JosefK2275
    @JosefK2275 1 month ago

    The background changes too much even when it's off. I am not using a girl but a tennis shoe (I bypassed the face fix nodes); could that be the reason?

    • @jerrydavos
      @jerrydavos  1 month ago

      FaceFix doesn't change the scene much...
      You can try changing the "Depth" ControlNet model and its processing node to the LineArt ControlNet model and LineArt preprocessor... and play with the strength and end percent; maybe that can help your situation.

  • @byeongmokjang4826
    @byeongmokjang4826 1 month ago

    It's so cool.
    However, the IC Raw KSampler is throwing an error:
    "KSamplerAdvanced:
    The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0"
    How can I solve it?

    • @jerrydavos
      @jerrydavos  1 month ago +1

      The light map should have the same number of frames as the source video, or more.
      Example 1:
      Source video = 5 seconds
      Light map video = 1 second
      Result: Error - "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0"
      Example 2:
      Source video = 5 seconds
      Light map video = 5 seconds
      Result: successful render
      Hope this makes it clear.
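
      The same check can be scripted before queueing: count the frames in both clips and make sure the light map covers the source. A sketch with placeholder file names:

      # Hedged sketch: fail fast if the light map is shorter than the source.
      import cv2

      def frame_count(path: str) -> int:
          cap = cv2.VideoCapture(path)
          n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
          cap.release()
          return n

      src, lm = frame_count("source.mp4"), frame_count("lightmap.mp4")
      if lm < src:
          raise SystemExit(f"Light map too short: {lm} frames vs {src} source frames")
      print("OK: light map covers the whole source clip")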

    • @byeongmokjang4826
      @byeongmokjang4826 1 month ago

      @@jerrydavos I am using the source files you provided, helenpeng.mp4 and LightMap.mp4.
      Both are 20 seconds.
      Do I need to set frame_load_cap to zero?

  • @user-eq9ge3vm5y
    @user-eq9ge3vm5y 2 months ago

    ty😇

  • @anitoon22
    @anitoon22 22 days ago

    Hello, is it possible to make Pikachu dance? Or a One Piece character dancing, eating a hamburger, and so on?

    • @jerrydavos
      @jerrydavos  22 days ago

      Hey, if you are using ComfyUI, AnimateAnyone or MagicAnimate can do such a task; alternatively, a paid online option is the viggle.ai website.

  • @HaoYang-if9tf
    @HaoYang-if9tf 2 months ago

    so coool!!

  • @user-eq9ge3vm5y
    @user-eq9ge3vm5y 2 months ago +1

    I'm not very familiar with IC-Light; can it be used with LCM?

    • @jerrydavos
      @jerrydavos  2 months ago

      You will need to change the scheduler and sampler steps... a bit experimental... I have not tried it yet, but others in the community have successfully used LCM in this workflow.

  • @Lucas-uk6fj
    @Lucas-uk6fj 2 months ago +1

    Where can I find this handsome original video of her? Thank you! It can help everyone; I succeeded. Issue news:
    [SAMLoader#2] The issue where the SAMLoader of the ComfyUI-YOLO node conflicted with ComfyUI-Impact-Pack has been patched. Please update ComfyUI-YOLO to the latest version.

    • @jerrydavos
      @jerrydavos  2 months ago +2

      Hey, I've also mentioned the sources in the description...
      Thanks.
      Here are the links:
      1) www.tiktok.com/@monominjii
      2) instagram.com/reel/C3FyWgYIc_x/
      3) www.youtube.com/@HelenPeng
      4) instagram.com/p/C4Lih8DIhBq/
      5) instagram.com/reel/C19CswgrLD3/
      Some are unknown...

  • @SiMBa27392
    @SiMBa27392 1 month ago

    By the way, I noticed that the placement of prompts affects the result very strongly, whether something is written at the beginning or at the end....

    • @jerrydavos
      @jerrydavos  1 month ago

      Yes, you are correct. The words written at the beginning are given more priority.

  • @JosefK2275
    @JosefK2275 2 months ago

    I don't get why the file output node has a # symbol. Can I replace it with a normal save path?

    • @jerrydavos
      @jerrydavos  2 months ago +1

      Yes, you can. Just copy and paste your folder path where you want to save the video or the images.

  • @BuckwheatV
    @BuckwheatV 1 month ago

    btw I was trying to figure out how I can decrease the level of stylization so my character would look closer to the original, but I really couldn't. Forgive my newbieness 😅 Could you please share some hints?

    • @jerrydavos
      @jerrydavos  1 month ago +1

      Hey, using the LineArt or Tile ControlNet would get you closer to the original look, but it's a complicated edit... also, it would ruin the light map.

    • @BuckwheatV
      @BuckwheatV 1 month ago

      @@jerrydavos Thank you, will try it!

  • @Bemyself1705
    @Bemyself1705 2 months ago

    Hi, when I started the render, ComfyUI showed me an error message saying "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". I used ChatGPT to fix it, and GPT kept trying to fix the execution.py code, which doesn't work at all. Have you had this kind of issue before? If you know how to fix it, I would really appreciate it. Thanks for your sharing.

    • @jerrydavos
      @jerrydavos  2 months ago

      The number of light map frames should also be equal to the source video's...

  • @sdanimationart
    @sdanimationart 2 months ago

    Error occurred when executing PreviewImage:
    index 0 is out of bounds for dimension 0 with size 0

    • @jerrydavos
      @jerrydavos  2 months ago

      Some images could not be generated... please test a different video with a human character to see if that video is the problem.

    • @sdanimationart
      @sdanimationart 2 months ago

      @@jerrydavos it worked, thanks

  • @calvinherbst304
    @calvinherbst304 2 months ago

    Help! First off, thank you so much for the tutorial. I can tell you put a lot of effort into not only the project itself, but also the resources for sharing this with us. I got everything set up and working correctly and ran a few quick generations to make sure all the models were installed. I then updated my ControlNet custom nodes and now, even when I revert to your original workflow, I get the error: Error occurred when executing ACN_AdvancedControlNetApply: ControlBase.set_cond_hint() takes from 2 to 4 positional arguments but 5 were given - any ideas? Thanks!

    • @jerrydavos
      @jerrydavos  2 months ago +1

      Hey, I updated all my nodes to check whether any errors come up... but it's working fine on mine.
      Check:
      1) Check that only SD 1.5 models are used in the CN model loaders... SDXL ControlNets can cause this.
      2) Check that the CLIP Text Encode nodes and ControlNet nodes are linked properly, with no floating nodes... due to some bug they can get corrupted. Download the original workflow again and test.
      3) Disconnect the optional mask input from BOTH ControlNets and test. If this fixes it, that means the masks are not being created properly.
      4) Replace the smZ CLIP Text Encode++ nodes with the normal default CLIP Text Encode... then check.
      Hopefully the above should help!

  • @Darquesse-y7k
    @Darquesse-y7k 1 month ago

    I can't do 5- or 10-second videos; it only allows under 1 second. Why?

    • @jerrydavos
      @jerrydavos  1 month ago

      Set the frame_load_cap from 10 to 0 in the load source video node to render all frames.

  •  2 months ago

    I can't see the "Manager" and "Share" buttons.

    • @jerrydavos
      @jerrydavos  2 months ago

      github.com/ltdrdata/ComfyUI-Manager
      Install the Manager from here; it's a great way to install nodes.

    •  2 months ago

      @@jerrydavos Thank you 😍

  • @hammad__official8756
    @hammad__official8756 2 months ago

    The Manager button is not shown for me; how do I install missing nodes?

    • @jerrydavos
      @jerrydavos  2 months ago +1

      Hey, sorry if I missed out the Manager...
      Download it from here:
      github.com/ltdrdata/ComfyUI-Manager
      and put it in ComfyUI > custom_nodes

    • @hammad__official8756
      @hammad__official8756 1 month ago

      @@jerrydavos Works, thanks

  • @최동욱-p4g
    @최동욱-p4g 2 months ago

    Is there a way to adjust the strength of checkpoints on this node? I can't find denoising strength in Ksampler 😭😭

    • @jerrydavos
      @jerrydavos  2 months ago

      The start step and end step work as the denoising... it's an advanced KSampler setting... you have to play with the values to find what works for you.
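
      As a rough mental model (an approximation I'm assuming, not an exact ComfyUI formula), a later start step keeps more of the input, much like lowering denoise:

      # Hedged sketch: approximate "denoise" equivalent of KSamplerAdvanced's
      # start_at_step. start_at_step=0 is a full repaint (denoise ~1.0).
      def approx_denoise(steps: int, start_at_step: int) -> float:
          return (steps - start_at_step) / steps

      print(approx_denoise(steps=20, start_at_step=0))   # 1.0 -> full repaint
      print(approx_denoise(steps=20, start_at_step=12))  # 0.4 -> closer to the input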

  • @Ella-book-714
    @Ella-book-714 1 month ago

    Want to ask: where are the light source materials from?

    • @jerrydavos
      @jerrydavos  1 month ago

      They can be made using simple shapes animated in After Effects.
      Otherwise, you can search for "contrasting" geometric pattern animation videos on stock websites like Shutterstock, Getty Images, Pexels, Pixabay, etc...
      Also, I've already included some sample light maps in the workflow link folder here: drive.google.com/drive/folders/1bFfBs8mkN1HLtT1Xy6wsuOV4jl2WqiO4
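
      For anyone without After Effects, a usable light map can also be generated procedurally. A hedged sketch with NumPy and OpenCV (resolution, length, and file name are arbitrary): it sweeps a soft white band across a black frame.

      # Hedged sketch: render a sweeping soft light band as a light map clip.
      import numpy as np
      import cv2

      W, H, FPS, SECONDS = 512, 768, 24, 5
      out = cv2.VideoWriter("lightmap.mp4", cv2.VideoWriter_fourcc(*"mp4v"), FPS, (W, H))
      for t in range(FPS * SECONDS):
          frame = np.zeros((H, W, 3), dtype=np.uint8)
          cx = (t / (FPS * SECONDS)) * W                   # band center sweeps left to right
          x = np.arange(W)[None, :]                        # column coordinates
          band = np.exp(-((x - cx) ** 2) / (2 * 80.0**2))  # soft gaussian falloff
          frame[:] = (band * 255).astype(np.uint8)[..., None]
          out.write(frame)
      out.release()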

  • @束傲军
    @束傲军 1 month ago

    Why is the video I generate very short, and what parameters do I need to modify?

    • @jerrydavos
      @jerrydavos  1 month ago

      The load cap is set to 10 frames in the load video node... increase it to as much as you need, or set it to 0 to render all frames.
      The light map video should also be the same length or longer, or else it will give an error.
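
      For reference, this is my reading of how the loader's three throttle settings interact (parameter names follow the VHS Load Video node; treat the arithmetic as an assumption):

      # Hedged sketch: frames emitted by "Load Video" given its settings.
      # frame_load_cap=0 means "no cap".
      def frames_loaded(total: int, frame_load_cap: int,
                        skip_first_frames: int = 0, select_every_nth: int = 1) -> int:
          remaining = max(0, total - skip_first_frames)
          picked = -(-remaining // select_every_nth)  # ceiling division
          return picked if frame_load_cap == 0 else min(picked, frame_load_cap)

      print(frames_loaded(total=312, frame_load_cap=10))  # 10 frames (~0.4 s at 24 fps)
      print(frames_loaded(total=312, frame_load_cap=0))   # 312 frames (the whole clip)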

    • @束傲军
      @束傲军 1 month ago

      @@jerrydavos I see, thanks!

  • @MsParkjinwan
    @MsParkjinwan 2 months ago

    Can you use this workflow to create a video featuring a specific anime character?

    • @jerrydavos
      @jerrydavos  2 months ago +1

      Maybe possible with LoRAs...

  • @رضامحمدی-ع7ك
    @رضامحمدی-ع7ك 2 months ago

    Please help:
    how to fix the error TypeError: T2IAdapterAdvanced.control_merge_inject() missing 1 required positional argument: 'output_dtype'

    • @jerrydavos
      @jerrydavos  2 months ago +1

      Hey, please update ComfyUI and all the other nodes, especially the controlnet_aux node.

    • @رضامحمدی-ع7ك
      @رضامحمدی-ع7ك 2 months ago

      @@jerrydavos Hello,
      I updated everything,
      but it gives the same warning again.

    • @رضامحمدی-ع7ك
      @رضامحمدی-ع7ك 2 months ago

      ​@@jerrydavos I updated, but it gives the same warning again. Please help.

  • @salomahal7287
    @salomahal7287 2 months ago

    Hi, I would love to make this workflow work for me, but I've got a couple of problems: the output is heavily altered and looks really trippy when I simply input a video with your settings, disable all LoRAs at the start, and press queue. There are no errors, but the output is nothing at all like the source footage. Also, with the load cap set to 10, it outputs only 5 frames?

    • @jerrydavos
      @jerrydavos  2 months ago +1

      1) Make sure the light map also has the same number of frames as the source video, or more.
      2) Check that skip frames is 0 if you want to start the render from the beginning.
      As for the trippy part... this workflow re-renders the frames from scratch using the AI models, so the legacy AI artifacts like bugged hands and faces will surely show up in the output.

    • @salomahal7287
      @salomahal7287 2 months ago

      @@jerrydavos Hey, thanks for the reply. It seems the weirdness came from the Upscale Image node of the stationary light map not being set to crop. One thing to add for the future: I'd recommend implementing keyboard shortcuts for the groups so you don't have to scroll through them every time.
      But big thank you, man

  • @ademayaashari1393
    @ademayaashari1393 2 months ago

    Why is it that when I test my 13-second video, the result is only a 1-second output?

    • @jerrydavos
      @jerrydavos  2 months ago

      In the source video input node, change the frame load cap from 10 frames to 0 to render all frames.

    • @ademayaashari1393
      @ademayaashari1393 2 months ago

      @@jerrydavos Okay. Since I have low VRAM, I decided to render 10 frames at a time, but each 10-frame batch's background is not the same. How do I get the same background across batches?

  • @Fucatstory
    @Fucatstory 2 months ago

    Hello bro, I have an error in the KSampler: T2IAdapterAdvanced.control_merge_inject() missing 1 required positional argument: 'output_dtype'

    • @Fucatstory
      @Fucatstory 2 months ago

      Do you know the reason why? Help me please 🐱

    • @jerrydavos
      @jerrydavos  2 months ago +1

      In the Manager press:
      1) Update ComfyUI
      2) Update All
      After updating, it should be fixed.

    • @Fucatstory
      @Fucatstory 2 months ago

      @@jerrydavos I have updated everything, including the missing nodes. But it seems like the workflows in my friend's videos all have this problem. 😢

    • @jerrydavos
      @jerrydavos  2 months ago +1

      @@Fucatstory Hey, please check whether all the linked models are there... and that only SD 1.5 models are used... if the problem is not solved, please contact me on Discord (ID: jerrydavos) and I'll help you from there.

    • @Fucatstory
      @Fucatstory 2 months ago

      @@jerrydavos yes so many thanks ❤️

  • @ESGamingCentral
    @ESGamingCentral 2 months ago

    If you don't mind my asking, how much VRAM does this use?

    • @jerrydavos
      @jerrydavos  2 months ago +1

      1) Minimum 8GB for the img2img workflow.
      2) Vid2vid may require more....
      I render with the img2img workflow in small batches; I have 8GB of VRAM.
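
      The batching itself is just slicing the extracted frame sequence into fixed-size chunks, e.g. (a sketch; the folder name is a placeholder and the batch size depends on your VRAM):

      # Hedged sketch: split a frame sequence into small img2img batches
      # so each run fits in 8 GB of VRAM.
      from pathlib import Path

      def batches(frame_dir: str, batch_size: int = 16):
          frames = sorted(Path(frame_dir).glob("*.png"))
          for i in range(0, len(frames), batch_size):
              yield frames[i:i + batch_size]

      for n, chunk in enumerate(batches("frames/", batch_size=16)):
          print(f"batch {n}: {chunk[0].name} .. {chunk[-1].name}")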

  • @DragonEspral
    @DragonEspral 1 month ago

    show

  • @Spindonesia
    @Spindonesia 2 months ago

    Bruh, what RTX do you use? Can you make a tutorial for WebUI Auto1111?

    • @jerrydavos
      @jerrydavos  2 months ago +1

      I have an RTX 3070 Ti laptop GPU with 8GB..... It's a complicated workflow and can only be built with nodes.... it can't be done in A1111 yet.

    • @calvinherbst304
      @calvinherbst304 2 months ago +1

      @@jerrydavos You are a hero for doing this with 8GB of VRAM. I'm on a similar setup and appreciate that this workflow can be run on a low-VRAM GPU, and that you also point out which settings help it run on low VRAM. Keep it up!

  • @asheronscall1234
    @asheronscall1234 2 months ago

    What's the source for the clip at 0:25?

  • @omarnawar5497
    @omarnawar5497 2 months ago