From Stills to Motion - AI Image Interpolation in ComfyUI!

  • Published: 13 Dec 2023
  • Steerable Motion is an amazing new custom node that allows you to easily interpolate a batch of images in order to create cool videos. Turn cats into rodents, people into cars or whatever you fancy!
    Image interpolation has never been so easy and fun :)
    == Links ==
    ComfyUI Workflows: github.com/nerdyrodent/AVeryComfyNerd
    == More Stable Diffusion Stuff! ==
    * Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Add anything to your AI art in seconds - • 3 Amazing and Fun Upda...
    * Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    * Consistent Character in ANY pose - • Reposer = Consistent S...
    == Support ==
    Want to support the channel?
    / nerdyrodent
  • Science

Comments • 212

  • @THISISSMACK
    @THISISSMACK 1 month ago +2

    Exactly the workflow I was looking for! And very well presented, Mister Rodent. Thanks!

  • @ashertique4651
    @ashertique4651 2 months ago

    Thank you for simplifying this so beautifully. The other workflows I found for Steerable Motion were so complex and gave so many errors, it was hard to know where to start. This just worked perfectly for me.

  • @latent-broadcasting
    @latent-broadcasting 4 months ago +2

    This is amazing! I'm using images with very little variation for creating consistent animations. I'm loving this workflow. Thanks for the tutorial!

    • @NerdyRodent
      @NerdyRodent 4 months ago

      Great to hear - thanks!

    • @jeanrenaudviers
      @jeanrenaudviers 3 months ago

      @@NerdyRodent Hello ! Does it ask to install custom nodes ?

    • @NerdyRodent
      @NerdyRodent 3 months ago

      @@jeanrenaudviers yup! You can click “install missing custom nodes” if you’re missing any custom nodes

  • @emptyheros
    @emptyheros 5 months ago +6

    I love this channel so much, it's my go to for latest info on AI!

  • @KennethEstanislao
    @KennethEstanislao 5 months ago +5

    Awesome workflow!!!

  • @electronicmusicartcollective
    @electronicmusicartcollective 5 months ago +4

    Hello my hero. I've been looking for an AI Morph solution for months and never found it, until now. You have already given me a lot of knowledge about automatic1111 and ComfyUI and I am doing a lot of research myself. Yes, I can't see python console anymore but since ComfyUI everything has become so much easier. Merry and relaxing Christmas from the bottom of my heart. AlbertoSono

  • @MrSporf
    @MrSporf 4 months ago +4

    Great video mate! Clear, precise and a free workflow too? What's not to like!

  • @aisestudio
    @aisestudio 3 months ago

    😍Thank you so much! It is an amazing tutorial, and thank you for the workflow!

  • @b0b6O6
    @b0b6O6 5 months ago +3

    cool stuff 😊

  • @mac24seven
    @mac24seven 5 months ago +1

    I was going to ask if there was a way to download the work flow but decided to wait until the end of the video.
    Glad I did!
    I've got to try this.

    • @Airbender131090
      @Airbender131090 5 months ago +3

      Aaaand? Where is it? I can't find it via the link - tons of workflows, but not this one

  • @Evl100077
    @Evl100077 1 month ago

    SAVED MY ASSIGNMENT LIKE GOD

  • @swannschilling474
    @swannschilling474 5 months ago +3

    Keep em coming!!! 😁 Great one again!! 🤩

  • @francaleu7777
    @francaleu7777 5 months ago +2

    great.. thank you

  • @cyril1111
    @cyril1111 5 months ago +1

    great workflow thanks! tips on how to make the video a little smoother, maybe slowing down the interpolations ?

    • @NerdyRodent
      @NerdyRodent 5 months ago

      One easy way is to ensure there's less similarity between the images. Other than that, it's just a matter of playing with the curves to get what you want.

  • @NoOnexRO
    @NoOnexRO 3 months ago

    Thank you for this amazing tutorial. Unfortunately, my personal laptop was not bought keeping in mind that at some point I will be interested in AI image creation. Now, after installing Stable Diffusion and after that ComfyUI, seeing that generating one single image at 512x512 takes about one hour while others do it in seconds... I kinda want to bang my head against a wall. There is a web version of ComfyUI where I could test some workflows. I tried the one from your description but I don't think I managed to find the right models for all the nodes. I'll try more and hopefully, I'll manage to test it. I know you have a simpler version in your Patreon account. I'll try that too the second my salary hits my bank account. :))) "See you" in a week or so! Once again, thank you for everything you post!

  • @LIMBICNATIONARTIST
    @LIMBICNATIONARTIST 5 months ago +2

    🔥🔥

  • @karen-7057
    @karen-7057 4 months ago

    thank you! was waiting for this one. finally got it working but still trying to tame the beast .... not there yet

  • @Mckdirt
    @Mckdirt 3 months ago

    Hey, great video :)
    I'm trying to find the workflow, I've followed the link, I see loads of workflows but not this one, do you have a direct link to it please? :) Thank you!

  • @PleaseOpenSourceAI
    @PleaseOpenSourceAI 5 months ago +1

    It looks almost like Deforum extension for A1111 was converted for Comfy 👍

  • @aipamagica1
    @aipamagica1 4 months ago +1

    Mine is stalling at the box right before output with STMFNet VFI - what is this? I can't find a reference to it in the manager. Thank you!

  • @devoiddesign
    @devoiddesign 5 months ago +8

    Thank you for the tutorials!
    I am stuck at the Batch Creative Node...
    Its saying "Error occurred when executing BatchCreativeInterpolation:
    'ControlNet' object has no attribute 'load_device' "
    What did I miss? I have the ControlNet we need installed already and have used ControlNet before.

    • @NerdyRodent
      @NerdyRodent 5 months ago +3

      Never seen that one! I'd work through the troubleshooting section as 90% of the time, errors mean that Comfy needs updating.
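
      Concretely, "updating" here usually means pulling the latest ComfyUI and its requirements, then doing the same for each custom node. A rough sketch for a manual git install (paths are assumptions; the Windows portable build ships its own update script, and ComfyUI Manager's "Update All" button does the same job from the UI):

          cd ComfyUI
          git pull
          python -m pip install -r requirements.txt
          # each custom node is its own git repo under custom_nodes/, e.g.:
          cd custom_nodes/ComfyUI-AnimateDiff-Evolved && git pull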

    • @Akkhar
      @Akkhar 5 months ago +2

      Same thing is happening to me too!!!

    • @CoolAiAvatars
      @CoolAiAvatars 5 months ago +2

      I can confirm that the error does not appear after updating comfyui.

  • @tstone9151
    @tstone9151 4 months ago

    Does this do a good job interpolating images of the same scene/shot? I just want something to animate my images for a movie

  • @miguelarce6489
    @miguelarce6489 2 months ago

    Great tutorial, thank you so much! Just one question: is it possible to do interpolation by just prompting, without images?

    • @NerdyRodent
      @NerdyRodent 2 months ago

      Yup, just use the batch prompt schedule
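
      For anyone curious what that looks like: the Batch Prompt Schedule node (from the FizzNodes pack) takes keyframed text instead of input images. A minimal sketch of the schedule format, where the frame numbers and prompts below are only placeholders:

          "0" :"a red fox sitting in a snowy forest",
          "24" :"a fox made of autumn leaves",
          "48" :"a wooden fox statue in a sunny meadow"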

  • @erics7107
    @erics7107 3 months ago

    What settings would you use if you wanted to keep the output as close as possible to the input images? I've tried to play with the batch creative interpolation across many settings, but no matter what, the image meaningfully changes - just curious if you've been able to accomplish this?

  • @gatoque12
    @gatoque12 2 months ago

    If you wanted to use this to do a video with a script that requires images to change at a specific time in the video, could you use this tool and have each image last your desired length, or can you not?

  • @Sharas_ai
    @Sharas_ai 4 months ago

    This is a great tutorial, thank you :) but my question is, is it possible to have this video like a loop? I mean getting a smooth transition from the last image to the first.

    • @BobDoyleMedia
      @BobDoyleMedia 4 months ago +1

      You can. You can select "closed_loop" to be true in the AnimateDiff group "Uniform context Options" in the upper left.

  • @Andro-Meta
    @Andro-Meta 4 months ago

    POM updated this workflow and the node, which kinda breaks this way of doing it. The new version uses sparsectrl rgb. Def worth a look :) And as always, thank you for all that you do! This workflow helped me out a ton.

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      Thanks for the info!

    • @erics7107
      @erics7107 3 months ago

      would it be possible to create a new video that uses the new workflow/node? I'm having a hard time figuring out how to get it to work @@NerdyRodent

    • @NerdyRodent
      @NerdyRodent 3 months ago

      I’ll pop a new workflow up on patreon which also uses sparse rgb in a day or so!

  • @vard_msx
    @vard_msx 4 months ago +1

    Very interesting. Thank you for sharing the workflow and tutorial. Unfortunately something is strange: my "Batch Creative Interpolation" node looks different from the one in the tutorial - there is no "control_net_name" entry to select a safetensors file. I tried with loaders but it seems Batch Creative Interpolation does not have an input for that. On the SteerableMotion github I noticed there is no input for control net in the INPUT_TYPES - did you do your own modification of SteerableMotion? Or is there something I need to do to make it visible?

    • @NerdyRodent
      @NerdyRodent 3 months ago +1

      I’ll pop a new workflow up on patreon in a day or so which also uses sparse rgb!

    • @Ramiroy
      @Ramiroy 3 months ago

      I have the same issue

  • @davewaldmancreative
    @davewaldmancreative 2 months ago

    nerdy. how do you make the time between transitions longer?

  • @ronnykhalil
    @ronnykhalil 5 months ago +6

    have I told you I loved you lately?

    • @Elwaves2925
      @Elwaves2925 5 months ago +3

      Have I told you there's no one else above you?

    • @MLABSofficial
      @MLABSofficial 5 months ago +2

      You fill my heart with gladness?

  • @sugartivi2126
    @sugartivi2126 1 month ago

    Thanks so much for this! I had a lot of errors in the beginning but it's working well now. I had a couple of basic questions: what are the best ways to work with this workflow and change the video resolution for the final output? Same with aspect ratio - is it possible to do different ones (ie 16:9) using this workflow? tysm!!

    • @sugartivi2126
      @sugartivi2126 1 month ago

      I also would love to know how to make the video resolution super duper low because I think it would be so cool to try this with pixel art as the input images!

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      Yup - you can pick any size you like with the SD1.5 range! It's best if it matches your image size though :)

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      @@sugartivi2126 to change the output resolution, move the mouse pointer over the resolution and then left click on the width and height to change the values, which are at 512 by default. You can also use the little arrows to increase or decrease the value.

    • @sugartivi2126
      @sugartivi2126 1 month ago

      @@NerdyRodent thank you!!

  • @ogrekogrek
    @ogrekogrek 5 months ago +1

    thx

  • @THbeto8a
    @THbeto8a 1 month ago +1

    Awesome video, thanks I'm stuck with an error on the STMFNet VFI node.
    "Error occurred when executing STMFNet VFI:
    Error(s) in loading state_dict for STMFNet_Model: Missing key(s) in state_dict: "gauss_kernel", "feature_extractor.conv1.resnext_small.conv1.weight"... and a list too long to copy on the comment

  • @Artishtic
    @Artishtic 5 months ago +2

    epic

  • @nttnrecords5474
    @nttnrecords5474 2 months ago

    Hello, I am looking for a solution for morphing transitions between 2 videos. I need the video to start with the last frame of video 1 and then morph into the first frame of video 2. I am trying to tweak the settings and the graph but I don't seem to find a solution. Also I am having trouble understanding how to tweak the length of the whole animation.

  • @aguyandhisguitars435
    @aguyandhisguitars435 5 months ago

    Out of all your workflows which would be the best first one for a beginner in comfyUI to get started with using? I’m also using an amd 7900xt.

    • @NerdyRodent
      @NerdyRodent 5 months ago +2

      I would just use the default workflow to start with!

  • @stefanopulici9889
    @stefanopulici9889 1 month ago

    Thank you very much for this.
    Do you know the reason why if I run the workflow with 2 images it takes only 5 minutes, but if I run it with 5 images it takes about 3 hours. The time needed seems to be exponential... :/

  • @Fweshiee
    @Fweshiee 1 month ago

    I have a question. Can you manipulate the aspect ratio of the images and the overall output? For example, I have a few AI Gen Images that were created on Midjourney in 9:16 aspect ratio. Can I input those images to receive the output in the same 9:16 aspect ratio? If not, how do we manipulate that?

  • @davewills6121
    @davewills6121 4 months ago

    Question!! My son is doing a project on ''The Mesolithic period'', he wants to use some examples of AI art for his talk on the subject. The problem is, all my attempts using Comfyui are a mutated group of cavemen, he's nervous as it is, he said my AI art will make him the laughing stock. So, are there any simple PROMPTS that i could use to produce good results?. cheers

  • @Dr.R.
    @Dr.R. 5 months ago

    I wanted this for a long time, and I really hoped it would work this time. I even installed everything fresh two times, but still an error :-(... Error: Can't find a usable init.tcl in the following directories... Can anyone help? That would be great.... After hours of trying I found the problem: I am using Matrix as the SD/Comfy installer; another version of Comfy works fine. Thx for the workflow!

  • @sugartivi2126
    @sugartivi2126 25 days ago +1

    Hey again nerdy, this workflow stopped working for me, so I consulted a friend who suggested that I update to the newest ip adapter which has had an update recently (I didn't update comfy itself because I'm using run diffusion which it seems just goes based on the most current comfy version there is anyway). But now, with the new ip adapter in there, and with all my models installed (through the run diffusion manager panel), it can't get past the batch interpolation node. I get this error:
    Error occurred when executing BatchCreativeInterpolation:
    'ModelPatcher' object has no attribute 'get_model_object'
    I've checked that all the models are there, and I also had a friend test out the workflow (with the new ip adapter) while running comfy locally on his machine, and it worked for him! So I'm confused at why the same things wouldn't work while running comfy in the cloud. So strange! any advice is welcome 🙏

  • @EmmaFitzgerald-dp4re
    @EmmaFitzgerald-dp4re 3 months ago

    Always love your vids, awesome! Seeing this error which is similar to other comments, but a little diff, everything's been updated,
    Error occurred when executing BatchCreativeInterpolation:
    'NoneType' object has no attribute 'lower'

    • @NerdyRodent
      @NerdyRodent 3 months ago

      You can drop me a dm on patreon for help, plus I’ll also be uploading a new workflow version using sparse rgb too!

    • @EmmaFitzgerald-dp4re
      @EmmaFitzgerald-dp4re 3 months ago

      @@NerdyRodent thank you, my own stupid fault. I was not using the correct model for Clip Vision. I also had to change the VFI, the one in your workflow gave me an error that I was missing a dll

  • @KingZero69
    @KingZero69 5 months ago +4

    bro… those girls in TANK TOPS with RAT HEADS are freaking HORRIFYING… 😂

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      Seems normal to me! 😂

  • @SKYGGEMUSIC
    @SKYGGEMUSIC 1 month ago

    Looks great, where is the specific workflow ? There are so many in the link (github) !

    • @SKYGGEMUSIC
      @SKYGGEMUSIC 1 month ago

      BatchImageAnimate.png

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      Second one from the bottom, with the video link that matches this video 😉

  • @jbiziou
    @jbiziou 4 months ago +1

    Great video :) When I loaded the workflow there was no graph image in the preview, and when running the prompt it keeps stopping at the Batch Creative Interpolation node. Any thoughts? I tried loading all missing nodes and restarting. Thanks again and great videos,

    • @NerdyRodent
      @NerdyRodent 4 months ago

      The graph should appear fairly quickly, like when I bypass the Ksampler nodes, etc? If it's not outputting a graph, I can only guess that it is outputting an error of some sort? If so, the node developer may be able to provide some sort of clue, as I've not had that happen as yet!

    • @jbiziou
      @jbiziou 4 months ago

      thanks for the reply, I shall keep investigating:) may try a fresh install and run it all again. Cheers . @@NerdyRodent

    • @jbiziou
      @jbiziou 4 months ago

      So strange. I totally did a clean reinstall of Comfy, all the dependencies, models, missing nodes, and still no graph, and getting stuck at

    • @jbiziou
      @jbiziou 4 months ago

      Ahhh, the "update everything" worked!! I got past the spot and got the graph :) !! Then ran out of memory, hah. Progress! :)

    • @jbiziou
      @jbiziou 4 months ago

      Error occurred when executing STMFNet VFI:
      ================================================================
      Failed to import CuPy.

  • @delfinandres
    @delfinandres 11 days ago

    Hi there, excellent tutorial. I keep getting an error about "Ipa Weight" when running the creative batch node, any ideas? I already updated Comfy and the nodes but the error keeps appearing.

    • @NerdyRodent
      @NerdyRodent 11 days ago

      ipa_weight for ipadapter should be a float value, so I'd start by checking that nothing says "NaN" or has text where a number should be

  • @TheCcamera
    @TheCcamera 4 months ago

    Amazing workflow! Which clip vision model gets used here? I also get an error that it failed to import CuPy, any hints? Or is it because of my 8 GB GPU?

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      It’s the usual Sd1.5 clip vision model. For CuPy it may be that you don’t have an Nvidia card, in which case you can just bypass that node. For more help drop me a dm on www.patreon.com/NerdyRodent 😀

    • @TheCcamera
      @TheCcamera 4 months ago

      thank you! works without the frame interpolation; strange I have an NVIDIA card (3070 mobile)@@NerdyRodent

    • @TheCcamera
      @TheCcamera 4 months ago +1

      For the record: works all like a charm, tested it on a 4090 now; my 3070 laptop GPU seems to be too weak for this workflow

    • @eyevenear
      @eyevenear 2 months ago

      ​@@NerdyRodent can you please send a direct link to the Clip model to make this work? I'm literally on the verge of losing it, everything else works is just that I miss, thank you!

    • @eyevenear
      @eyevenear 2 months ago

      @@TheCcamera ​ can you please send a direct link to the Clip model to make this work? I'm literally on the verge of losing it, everything else works is just that I miss, thank you!

  • @mishash
    @mishash 5 months ago

    Hi! This is a two-part question.. Can I use a lora in the prompt? Like "0" :"". Probably not... So how can I use a lora in this workflow? And then, second question - can I use multiple loras with a connection to the timeline "0", "4", "20", "36" etc. in your prompt? Probably not either, then maybe just a separate lora for each image? (And if that's possible then probably a different model for each image is possible too?) Thank you

    • @NerdyRodent
      @NerdyRodent 5 months ago

      You can load as many Loras as you like - just use the Lora loaders
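
      For anyone wiring this up by hand: each "Load LoRA" node takes MODEL and CLIP in and passes them back out, so they simply chain. A rough sketch of two stacked loaders in ComfyUI's API-format workflow JSON - the node ids, filenames and strengths below are placeholders, and node "4" is assumed to be the checkpoint loader:

          "10": {"class_type": "LoraLoader",
                 "inputs": {"model": ["4", 0], "clip": ["4", 1],
                            "lora_name": "style_a.safetensors",
                            "strength_model": 0.8, "strength_clip": 0.8}},
          "11": {"class_type": "LoraLoader",
                 "inputs": {"model": ["10", 0], "clip": ["10", 1],
                            "lora_name": "character_b.safetensors",
                            "strength_model": 0.6, "strength_clip": 0.6}}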

    • @mishash
      @mishash 5 months ago

      @@NerdyRodent So applying different lora to each specific image is not possible? if I have 2 input images, Julia Roberts and Tom Cruise, the faces I'm getting in output video is neither of them - model changes them both to something else. And so I have to apply loras.

    • @mishash
      @mishash 4 months ago

      @@NerdyRodent btw, I didn't know you could manually add lora loader to any random workflow... for some reason I always thought that this would require workflow changes on the code level... thanks for the tip! :)

  • @hashir
    @hashir 5 months ago +2

    How did you get your comfy ui to look all colourful?

    • @NerdyRodent
      @NerdyRodent 5 months ago +2

      You can right-click on any node and change its colour 😃

  • @Herman_HMS
    @Herman_HMS 5 months ago

    Seems great, but I really struggle to make anything out of it. Tried the settings from the video as well as many others and I'm getting some abominations, flashing images and nothing like the source pictures. Any universal settings that you could recommend?

    • @Herman_HMS
      @Herman_HMS 5 months ago +2

      ok, I managed to solve it, so I'll post for anyone with similar problems. There was something wrong with my IP adapter and clip_vision models. I just redownloaded them and it works fine now.

    • @giancarloorsi4124
      @giancarloorsi4124 5 months ago

      @@Herman_HMS can you please point me to the correct Clip Vision model to use ? I can't find how to download the SD1.5/model.safetensors that nerdy rodent uses in the video

    • @alishkaBey
      @alishkaBey 5 months ago

      @@giancarloorsi4124 If you find it just let me know :D I'm waiting for it too

    • @DraceAI
      @DraceAI 5 months ago

      @@giancarloorsi4124 Same, can't find it, but it might be a case of me just looking past it in the references.

  • @samuelgomez4101
    @samuelgomez4101 29 days ago

    hello! Where can I find this workflow?? I don't see it in the link provided.

  • @nickmarlow848
    @nickmarlow848 5 months ago +2

    Great tutorial! But I keep getting a KSampler error at the end. It seems my 3090 is running out of memory! Is my card already outdated? Or am I somehow loading in a wrong model?

    • @NerdyRodent
      @NerdyRodent 5 months ago

      I’ve got an old 3090 as well and not had any issues yet! Perhaps if you’re doing more than 500 frames?

    • @NerdyRodent
      @NerdyRodent 5 months ago

      Just to add: if you use >12 images then it will need more than 24 GB, so that's another option

    • @the_one_and_carpool
      @the_one_and_carpool 5 months ago

      So I should not try on a 3060? I get the same error with the original settings @@NerdyRodent

    • @NerdyRodent
      @NerdyRodent 5 months ago +1

      AnimateDiff can use a lot of VRAM so I personally suggest 12GB+, though I think people run it with less

    • @the_one_and_carpool
      @the_one_and_carpool 4 months ago

      You are the best, thank you. I needed a picture-to-picture morph, been looking for months; the one you made was the best I've seen @@NerdyRodent

  • @furi216
    @furi216 2 months ago +1

    Hey, I'm trying to run this on Google Colab, however I'm getting multiple issues: I get 8 identical images generated (with a batch size of 8) and the STMFNet VFI is not working - it's trying to download the model from multiple directories but all of them are 404. I found one on Hugging Face, but when I try to run it I get this: Error(s) in loading state_dict for STMFNet_Model:
    Missing key(s) in state_dict: and a bunch of parameters. What could be wrong?

    • @THbeto8a
      @THbeto8a 1 month ago

      I have the same issue

  • @IdgrafixCh
    @IdgrafixCh 4 months ago

    Hi there, Thanks a lot for your great tutos! I think there must have been an update that causes the following error ("Error occurred when executing BatchCreativeInterpolation:
    BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'positive'). The "Batch Creative Interpolation" node seems a bit messed up after the update.

    • @NerdyRodent
      @NerdyRodent 3 months ago +2

      Make sure to update ComfyUI as well as your custom nodes!

    • @santobosco5008
      @santobosco5008 3 months ago +1

      Me too! There are no updates visible, did you work it out?

    • @IdgrafixCh
      @IdgrafixCh 3 months ago

      @@santobosco5008 It was working fine until a recent update which seems to have messed up the "Batch Creative Interpolation" node.

    • @NerdyRodent
      @NerdyRodent 3 months ago

      I’ll upload a new workflow to patreon in a day or so which also uses sparse control!

  • @haydenmartin5866
    @haydenmartin5866 4 months ago

    Error(s) in loading state_dict for ResamplerImport:
    size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
    no idea what this means or how to fix it?

    • @NerdyRodent
      @NerdyRodent 4 months ago

      As a guess, it could be that you're trying to use an SDXL model?

    • @haydenmartin5866
      @haydenmartin5866 4 months ago

      It’s realisticvision (1.5). I think it’s to do with the controlnet in the batch creative interpolation. Trying to download the CN that shows upon loading your workflow

  • @yen.p2044
    @yen.p2044 2 months ago

    Hi, I am fascinated by this workflow! Can I use it on m1 MacBook Pro? Every time I try, ComfyUI disconnects in the AnimateDiff process.

    • @NerdyRodent
      @NerdyRodent 2 months ago

      I don't have a MacBook, I'm afraid :/

  • @immeb71
    @immeb71 5 months ago

    Thanks for the lesson. But I can't get the workflow to work
    Error in the sampler, replacing it with a standard one does not help.
    Error occurred when executing KSampler Adv. (Efficient):
    Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

    • @NerdyRodent
      @NerdyRodent 5 months ago +1

      Remember to check the troubleshooting at the top. 90% of the time you need to update your ComfyUI install or custom nodes

  • @suganesan1
    @suganesan1 1 month ago

    So, how long does it take you to complete? I have a 4070 Super; mine is stuck at the last KSampler for a while, nothing changes. In cmd it shows loading 4 new models but nothing is loading

    • @NerdyRodent
      @NerdyRodent 1 month ago

      Couple of minutes, depending on length

  • @alonsogarrote8898
    @alonsogarrote8898 1 month ago

    So, I watched the video and see this is for SD 1.5?, can you clarify....

  • @chronoxofficial
    @chronoxofficial 4 months ago

    Looks amazing! Unfortunately I'm getting an error regarding the IP adapterModelLoader: 'NoneType' object has no attribute 'lower' And a bunch of lines in a file called 'execution.py' that are faulty. Any idea? I updated to the latest version

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      For support, see www.patreon.com/NerdyRodent 😃

    • @chronoxofficial
      @chronoxofficial 4 months ago

      Done 😁@@NerdyRodent

    • @asishkumarpadhy3156
      @asishkumarpadhy3156 4 months ago

      Hi, I am having the same problem and I can't get your solution; please let me know as well 🙏!

  • @KittisupTungyasub
    @KittisupTungyasub 1 month ago

    I set closed_loop to true, but my video still doesn't loop.. How do I do that?

  • @Epicfuzz
    @Epicfuzz 3 months ago

    I keep getting this error when it runs through the batch interpolation node "BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'cn_start_at'" Anyone have any thoughts??

    • @NerdyRodent
      @NerdyRodent 3 months ago

      I’ll drop an updated version which also uses sparse control on patreon in a day or two 😀

  • @Semi-Cyclops
    @Semi-Cyclops 5 months ago +2

    anyone else getting this error 'ControlNet' object has no attribute 'load_device'?

  • @user-lt8hu3zp7b
    @user-lt8hu3zp7b 1 month ago

    SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). It's a png so I'm having a hard time opening a direct json file to fix it; even if I save the workflow as json it just opens the image

    • @polsemad1
      @polsemad1 1 month ago

      did you solve this?

    • @PandAttack80
      @PandAttack80 1 month ago

      I have the same problem, did you solve it?

  • @bottonegiulio
    @bottonegiulio 4 months ago

    Very interesting, but I can not find the workflow using the link above, any clue?

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      For support, see www.patreon.com/NerdyRodent 😀

  • @tonon_AI
    @tonon_AI 1 month ago

    getting this error: Loop (437,506) with broadcast (465) - not submitting workflow

    • @NerdyRodent
      @NerdyRodent 1 month ago +1

      Try updating ComfyUI + all custom nodes to their current release

  • @sirolim_
    @sirolim_ 4 months ago

    Where do I place the ControlNet model in the Batch Creative Interpolation node?

    • @NerdyRodent
      @NerdyRodent 3 months ago

      You can drop me a dm on patreon if you need more help!

  • @mao_miror
    @mao_miror 3 months ago

    hi i have a problem: Error occurred when executing KSampler Adv. (Efficient):
    'NoneType' object has no attribute 'to'

    • @NerdyRodent
      @NerdyRodent 3 months ago

      NoneType means nothing is being used (hence it not having any attributes), so make sure you’re loading the correct models and that none of the model files are corrupted

    • @mao_miror
      @mao_miror 3 months ago

      @@NerdyRodent Hello thanks, I got it working but the picture is totally blurry

    • @NerdyRodent
      @NerdyRodent 3 months ago

      I’ll drop a new version which also uses sparse control in patreon in a day or so!

  • @mehradbayat9665
    @mehradbayat9665 3 months ago

    Which one of your workflows is the one you presented here, you have a million different .png files...

  • @DerekShenk
    @DerekShenk 1 month ago

    Comfyui is powerful, but I spend many hours trying to get nodes to work that Manager does not correct. I can't seem to find much info on where or how to install STMFNET. I manually copied the pth file but it must not be right because I keep getting RuntimeError: Error(s) in loading state_dict for STMFNet_Model. As others have indicated, when executing the workflow, it throws an error that the path to STMFNet cannot be found. Manual installation has not worked. Any suggestions?

    • @NerdyRodent
      @NerdyRodent 1 month ago

      Everything worked automatically for me, no manual copies or what not. Could be an out of date install at a guess!

    • @DerekShenk
      @DerekShenk 1 month ago +1

      @@NerdyRodent For anyone struggling with ST-MFNet node, replace it with FILM VFI node and everything works great!

    • @Russtachio
      @Russtachio 1 month ago

      @@DerekShenk Thank you! I was having this error too and it made me give up on this workflow. Gonna go try out FILM VFI right now.

  • @iozsoo
    @iozsoo 3 months ago

    I'm getting an error regarding the IP adapterModelLoader: 'NoneType' object has no attribute 'lower' :(

    • @NerdyRodent
      @NerdyRodent 3 months ago

      NoneType = the node hasn't been able to load the model, so make sure your download isn't corrupted and you're using the correct model!

    • @iozsoo
      @iozsoo 3 months ago

      Thank you very much! Now it says Error occurred when executing STMFNet VFI, and Failed to import CuPy ☹ It's not my day 😀

    • @NerdyRodent
      @NerdyRodent 3 months ago

      @iozsoo cupy is for Nvidia cards so you can just bypass it. There is experimental Linux ROCm support, but I don’t have an AMD card 🫤

    • @iozsoo
      @iozsoo 3 months ago

      @@NerdyRodent Unfortunately, I have an RTX 3060, and I've just installed cupy, but it's still failing to import it. Same error, while executing STMFNet VFI ☹

    • @NerdyRodent
      @NerdyRodent 3 months ago

      The 3060 should work fine ^^ You can check if others have any similar issues with their Comfy install at github.com/Fannovel16/ComfyUI-Frame-Interpolation/issues

  • @mikelaing8001
    @mikelaing8001 5 months ago +1

    Do you need to install CuPy separately?

    • @clenzen9930
      @clenzen9930 5 months ago +1

      I'm stuck on CuPy not installing too. I've added CUDA and cuTENSOR & NCCL. cuDNN wants credentials. Feels like I'm going down the wrong path. ComfyUI has its own python / (conda?) environment so I don't know.

    • @mikelaing4859
      @mikelaing4859 5 months ago

      @@clenzen9930 I've not tried installing it yet. Was looking at it earlier, think I need to install cuda tool kit then cupy but was gonna see if anyone had some wisdom to share first.

    • @TheUrbanPassenger
      @TheUrbanPassenger 4 months ago

      @@clenzen9930 I also got these problems. It was because of the framerate increase section (STMF Net VFI). I just bypassed it by connecting the KSample directly to the saving section ("video combine") instead of going from KSample to "split image batch". Worked for me. Got good results though.

    • @jbiziou
      @jbiziou 4 months ago

      I got stuck at the same spot here: Error occurred when executing STMFNet VFI: Failed to import CuPy. I'll try your hack of bypassing the split image batch, fingers crossed. But I'd love to know how to get it to work like Nerdy has it in his video :)
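
      For anyone who wants the frame interpolation rather than the bypass: CuPy has to go into ComfyUI's own Python environment, not the system one. A rough sketch, assuming an NVIDIA card on CUDA 12.x (use cupy-cuda11x for CUDA 11, and your own venv's pip if you're not on the Windows portable build):

          # Windows portable build - run from the ComfyUI_windows_portable folder
          python_embeded\python.exe -m pip install cupy-cuda12x

          # regular venv / conda install
          python -m pip install cupy-cuda12x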

  • @user-mf1nf3qb8h
    @user-mf1nf3qb8h 4 months ago +1

    Thanks for the video! There is one problem: it gives an error message. How do I fix it properly? Also, instead of previewing the image, a black screen is constantly displayed. What should I do about it?
    ""Error occurred when executing BatchCreativeInterpolation:
    Error(s) in loading state_dict for ImageProjModelImport:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).""
    I updated everything, so the problem is not in the old version.

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      Hi! Drop me a dm on www.patreon.com/NerdyRodent and I’ll see what I can do 😃

    • @user-mf1nf3qb8h
      @user-mf1nf3qb8h 4 months ago

      @@NerdyRodent +

    • @user-mf1nf3qb8h
      @user-mf1nf3qb8h 4 months ago

      @@NerdyRodent I wrote to you

    • @carlherner4561
      @carlherner4561 4 months ago +1

      Updating the IP-Adapter models fixed this for me (make sure you have the SD 1.5 Plus one as he has in the video)

    • @user-mf1nf3qb8h
      @user-mf1nf3qb8h 4 months ago

      @@carlherner4561 Thanks, I've tried a lot. currently working

  • @carsoncarr-busyframes619
    @carsoncarr-busyframes619 5 months ago +3

    I managed to work through errors on about 5 different nodes (updating comfy in the manager fixed most of them) but am hung up at the end where the animate diff loader feeds into the K-sampler. I have mm_sd_v15_v2.ckpt and sqrt_linear (Animate Diff) set which matches the video. the error is-
    "Error occurred when executing ADE_AnimateDiffLoaderWithContext: module 'comfy.ops' has no attribute 'Linear'..."
    I wonder if the K sampler it's plugged into has already been updated because I have an additional field called "sampler state" above "add noise" that I don't see in this video.

    • @NerdyRodent
      @NerdyRodent 5 months ago +1

      Remember to check the troubleshooting at the top. 90% of the time you need to update your ComfyUI install or custom nodes

    • @CoolAiAvatars
      @CoolAiAvatars 5 months ago +2

      I got the same error; I needed to select "Update All", not only ComfyUI, in order for it to work ;)

  • @jasonstetsonofficial
    @jasonstetsonofficial 5 months ago +5

    so you can actually make a video that is several minutes long by just adding a lot of pictures?

    • @NerdyRodent
      @NerdyRodent 5 months ago +3

      Yup! You can indeed slap loads of images in and crank the batch size up 😀

    • @FabioComparelli
      @FabioComparelli 5 months ago +1

      @@NerdyRodent have you tried to interpolate more than 12 images? The VRAM still poses a problem; with a 3090, 12 images is the max for me

    • @NerdyRodent
      @NerdyRodent 5 months ago +3

      Didn't go past 10 myself, but just tried with 13 and it does indeed need more than 24GB VRAM then! It may be that the node could be optimised some - probably best to double-check with the node author.

    • @FabioComparelli
      @FabioComparelli 5 months ago +2

      @@NerdyRodent Yes, @POM is already looking to find a solution; unlimited image input will be a game changer 👀

    • @NerdyRodent
      @NerdyRodent 5 months ago +1

      @@FabioComparelli nice 👍

  • @deepuvinil4565
    @deepuvinil4565 4 months ago

    Where can i find the workflow

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      I always put the links into the video description 😉

    • @deepuvinil4565
      @deepuvinil4565 4 months ago

      @@NerdyRodent the github link ? But i can’t download that file🥹🥹

    • @deepuvinil4565
      @deepuvinil4565 4 months ago

      @@NerdyRodent really sorry am new to this .. got it thanks ☺️

  • @user-pk3cx6lf3c
    @user-pk3cx6lf3c 3 months ago

    Awesome how nicely you talk in your videos. When I use your workflow I always get this message. I tried to figure it out but I am too new to have any idea of what I am doing.
    Error occurred when executing IPAdapterModelLoader:
    'NoneType' object has no attribute 'lower'
    File "D:\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 593, in load_ipadapter_model
    model = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI\comfy\utils.py", line 12, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
    ^^^^^^^^^^

  • @user-in1mg9id2u
    @user-in1mg9id2u 4 months ago

    Hi, I truly like your channel, and this time I am trying to execute this workflow but I have some issues; perhaps you may know what the problem is?
    I kept having some errors and discovered that changing the ipadapter that you use from ip-adapter-plus-sd15 to ip-adapter-sd15 fixes the issue but still having problems.
    After this process the final result is something totally different from the input images, and all the parameters are just like yours in this video.
    Do you have any idea what could be the problem I am facing? Because I have no clue at all......I am lost :(

  • @fpvx3922
    @fpvx3922 5 months ago

    Is there something similar for automatic 1111? Cool video btw...

    • @NerdyRodent
      @NerdyRodent 5 months ago +3

      Not that I’ve found as yet, but the hunt continues! Let me know if you find anything

  • @Mediiiicc
    @Mediiiicc 3 months ago

    Every time I try comfyui I get a dozen errors that need to be solved.

    • @NerdyRodent
      @NerdyRodent 3 months ago

      You can drop me a dm on patreon for help!

    • @Mediiiicc
      @Mediiiicc 3 months ago

      @@NerdyRodent I've got it working finally, but the output doesn't look anything like the input images. Do you have a guide that uses all similar images at the input? For example, I want to make a video of a person sitting in a chair and then stand up rather than having a bunch of random images at the input.

  • @amrsabry2402
    @amrsabry2402 4 months ago

    I want the workflow link please, because I am still new to ComfyUI

    • @NerdyRodent
      @NerdyRodent 4 months ago +1

      == Links ==
      ComfyUI Workflows: github.com/nerdyrodent/AVeryComfyNerd

  • @keystothebox
    @keystothebox 3 months ago

    Seems like most of the plugins in the shared workflow are missing or broken and they are not coming up when searching for custom nodes. When loading the graph, the following node types were not found:
    Note Plus (mtb)
    IPAdapterModelLoader
    ACN_SparseCtrlRGBPreprocessor
    ACN_SparseCtrlLoaderAdvanced
    VHS_LoadImagesPath
    ADE_AnimateDiffUniformContextOptions
    ADE_EmptyLatentImageLarge
    ACN_AdvancedControlNetApply
    VHS_SplitImages
    STMFNet VFI
    KSampler Adv. (Efficient)
    ADE_AnimateDiffLoaderWithContext
    VHS_VideoCombine
    BatchCreativeInterpolation

    • @NerdyRodent
      @NerdyRodent 3 months ago

      You can drop me a dm on patreon if you need help 😀

  • @lakislambrianides7619
    @lakislambrianides7619 5 months ago

    I keep getting errors whatever I do. What the hell does this mean: Error occurred when executing ADE_AnimateDiffLoaderWithContext:
    module 'comfy.ops' has no attribute 'Linear'... so not worth it

    • @NerdyRodent
      @NerdyRodent 5 months ago +1

      Remember to check the troubleshooting guide at the top. 90% of the time you need to update your ComfyUI or custom nodes

  • @damienprod8934
    @damienprod8934 5 months ago +3

    you can't load as many images as you like - that's not true. A new model is loaded for each image added and here's the error you get:
    WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.
    loading in lowvram mode 256.0
    And nothing happens.
    It's a shame, this kind of workflow could have been interesting for long renderings.

    • @NerdyRodent
      @NerdyRodent 4 months ago +3

      You can still do long renders like normal, but over 12 input images uses a lot of VRAM. Hopefully the developer will find a way to allow more than that 😃

  • @abhinavaserkar1055
    @abhinavaserkar1055 4 months ago

    Can you show how to load this in ComfyUI rather than directly showing the results?

    • @NerdyRodent
      @NerdyRodent 4 months ago +2

      To load a workflow in ComfyUI, simply drag the workflow onto the canvas! Another way is to click the "load" button and select the file 👍

  • @polystormstudio
    @polystormstudio 5 months ago +2

    I'm getting errors galore

  • @ankethajare9176
    @ankethajare9176 3 months ago

    BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'cn_start_at'
    Getting this error.. can someone help me please

    • @cfcmoon1
      @cfcmoon1 3 months ago

      Same here.

    • @NerdyRodent
      @NerdyRodent 3 months ago +2

      I’ll pop an updated version using sparse control on patreon in a day or two!

    • @cfcmoon1
      @cfcmoon1 3 months ago

      What about the new version ? @@NerdyRodent

    • @cfcmoon1
      @cfcmoon1 3 months ago

      It's in your patreon and the new workflow works perfectly

  • @TheNexusRealm
    @TheNexusRealm 3 days ago

    Hello, I have modified the workflow a little and added an upscale image. And I had a question: how to make an upscale using supir? Will this work with video? I don't have much experience.

    • @NerdyRodent
      @NerdyRodent 3 days ago +1

      Yup, you can upscale the output too! Also remember though that supir isn’t for commercial use.