Faster Video Generation in A1111 using LCM LoRA | IP Adapter or Tile + Temporal Net Controlnet

  • Published: 11 Nov 2024

Comments • 52

  • @titusfx
    @titusfx 11 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 🎬 *The video discusses using LCM in Automatic1111 to generate videos 3 to 5 times faster, focusing on image-to-image video generation, which is simple and doesn't require extra extensions.*
    01:12 🎨 *The video demonstrates using DaVinci Resolve and Photoshop to generate video frames and prepare them for image-to-image generation.*
    02:48 🖼️ *It shows how to use the LCM LoRA for image-to-image generation and adjust parameters like sampling steps, CFG scale, and ControlNets (a scripted sketch of these settings follows below).*
    05:14 🧩 *Setting up TemporalNet and ControlNet for enhanced image control is explained.*
    06:52 ⚙️ *The video covers generating frames, checking their quality, and using Topaz Photo Studio for image adjustments.*
    09:39 🔄 *Adjusting video speed, retime, and scaling settings in DaVinci Resolve to enhance the final video quality is discussed.*
    10:49 🔮 *The video mentions using the IP Adapter for more style transfer and control in image-to-image generation.*
    15:14 🤖 *The LCM LoRA is recommended for faster video and image generation, but it's noted that it may not work well with AnimateDiff and requires experimentation.*
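
    A minimal sketch of what those img2img settings could look like when driven in batch through A1111's web API (the UI started with the --api flag). The LoRA filename, prompt, port, and parameter values are assumptions based on typical LCM LoRA usage, not taken from the video:

        import base64
        import requests

        def lcm_img2img(frame_path: str) -> bytes:
            # Encode one extracted video frame for the img2img endpoint.
            with open(frame_path, "rb") as f:
                init_image = base64.b64encode(f.read()).decode()

            payload = {
                "init_images": [init_image],
                "prompt": "masterpiece, best quality <lora:lcm-lora-sdv1-5:1>",  # assumed LoRA filename
                "steps": 6,                 # LCM works in the ~4-8 step range
                "cfg_scale": 1.5,           # LCM needs CFG near 1-2, not the usual ~7
                "denoising_strength": 0.4,  # low enough to keep frames consistent
                "sampler_name": "Euler a",
            }
            r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
            r.raise_for_status()
            return base64.b64decode(r.json()["images"][0])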

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Thanks, will try to include these in future videos; will do the reverse operation as well if I get the time later. Thank you.

  • @dreamzdziner8484
    @dreamzdziner8484 5 months ago

    How could I miss this gem of a video for so long? Thank you so much for this, mate 💛🤝😍

    • @AI-HowTo
      @AI-HowTo  5 months ago +1

      Glad you found it useful; you are welcome.

  • @razvanmatt
    @razvanmatt 11 months ago

    Another great video from you! Thanks a lot for sharing this, great in-depth info!

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Thanks for your kind remarks; hopefully it is useful for some.

  • @ohheyvoid
    @ohheyvoid 11 months ago

    This is such an awesome tutorial. Just found your channel. Excited to binge watch all of your videos. Thank you for sharing!

    • @AI-HowTo
      @AI-HowTo  11 months ago +1

      Thank you, hopefully you will find something useful here and some cool learning tips.

  • @59Marcel
    @59Marcel 11 months ago +1

    This is so good. AI imaging is so fascinating. Thanks for showing us how it works.

    • @AI-HowTo
      @AI-HowTo  11 months ago

      You are welcome; yes, it's fun and interesting, and it will get better and faster over time.

  • @CGFUN829
    @CGFUN829 3 months ago

    Wow, looks like what I need, thank you!

  • @aidgmt
    @aidgmt 11 months ago

    I was wondering if there was a way to make videos smoother... and here is the method. You are the best!

  • @RaysAiPixelClips
    @RaysAiPixelClips 11 months ago +3

    The latest AnimateDiff update added the LCM sampler.

    • @AI-HowTo
      @AI-HowTo  11 months ago +1

      Thanks for the info, will recheck that on A1111; my recent tests on A1111 were not great, will try again with a fresh install.

  • @michail_777
    @michail_777 11 months ago

    Hi. If you don't want to wait for ControlNet to be loaded and unloaded in Automatic1111, you can go to the settings and set the slider for the ControlNet model cache (I don't remember the exact name); then ControlNet stays in memory all the time. It takes more memory, but generation is faster (a sketch of that settings change follows below). Optical Flow is also available in Deforum, where you need to insert the input video into ControlNet and into the "init" tab. TemporalNet 2 has also appeared, but in order to use it you need to configure something in Automatic1111.
    Have a nice day
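
    A sketch of that cache setting as it might appear in A1111's config.json; the option key is an assumption about the sd-webui-controlnet extension, and in practice it is safer to change it in the UI (Settings > ControlNet), noting it requires a restart:

        import json

        # Raise ControlNet's model cache so models stay resident in memory
        # instead of being loaded/unloaded on every generation (uses more VRAM).
        with open("config.json") as f:             # A1111's root config.json
            cfg = json.load(f)

        cfg["control_net_model_cache_size"] = 2    # assumed option key
        with open("config.json", "w") as f:
            json.dump(cfg, f, indent=4)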

    • @AI-HowTo
      @AI-HowTo  11 months ago +1

      Thanks for the info. I don't think it works with 8GB VRAM, unfortunately; indeed, the load and unload make things take a long time. The TemporalNet 2 file is also very large at 5.7GB, which could be an issue on my laptop as well... hopefully we get more optimized networks soon; otherwise, I should start using RunPod more often :)

  • @APOLOVAILS
    @APOLOVAILS 11 months ago

    Super cool, bro! Thanks a lot!
    Please do one for ComfyUI 🙏

  • @FifthSparkGaming
    @FifthSparkGaming 11 months ago

    Wow! Incredible tutorial! So much care and precision. I’m sure this video took a while to make + running your experiments. Thank you!!
    (Btw, how much VRAM do you have?)

    • @AI-HowTo
      @AI-HowTo  11 months ago +1

      Thanks; true. 8GB of VRAM, on an RTX 3070 Laptop GPU.

  • @Chronos-Aeon
    @Chronos-Aeon 8 months ago

    Tried it with SD Forge, it works perfectly. Thanks, man. Since you all have Python installed, you can use the "moviepy" module to extract the frames of your videos and also to build the video back from the generated images afterwards (a sketch of both steps follows below).
    Edit:
    I wonder if there is a way to use it in txt2img so we can use OpenPose rather than SoftEdge, giving more freedom over what we want (like the environment).
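
    A sketch of that moviepy round trip, extraction and reassembly; the file names, folder layout, and frame rate handling are assumptions:

        import glob
        import os

        from moviepy.editor import ImageSequenceClip, VideoFileClip

        # Split the source video into numbered PNGs for A1111's img2img batch tab.
        clip = VideoFileClip("input.mp4")
        os.makedirs("frames", exist_ok=True)
        clip.write_images_sequence("frames/frame%04d.png", fps=clip.fps)

        # ...run the frames through img2img, writing results to generated/ ...

        # Reassemble the processed frames at the original frame rate.
        frames = sorted(glob.glob("generated/*.png"))
        ImageSequenceClip(frames, fps=clip.fps).write_videofile("output.mp4")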

    • @AI-HowTo
      @AI-HowTo  8 months ago

      You are welcome. AnimateDiff works better in txt2img with OpenPose, for example; it gives more freedom over the environment, but requires more computing power and a better GPU.

  • @julss6635
    @julss6635 11 months ago

    Nice tutorial bro!

  • @souravmandal9264
    @souravmandal9264 7 months ago

    You haven't mentioned the model. Also, what should be put in the VAE folder?

    • @AI-HowTo
      @AI-HowTo  7 months ago

      The video just focuses on how to do things; the model doesn't matter, any model can be used. Some models don't require a VAE, so we usually keep the VAE set to Automatic, or select a specific VAE depending on the model specs, which tell us whether we should use a separate VAE or whether the VAE is already baked into the model... In this video I used a normal model, which is aniverse v1.5... Currently the LCM sampler is also officially supported in A1111, and there are LCM models too that don't need a LoRA to be used.

  • @tyalcin
    @tyalcin 11 months ago

    Hi there & thanks for the tut. Quick question: why does the output image look better on ComfyUI?

    • @AI-HowTo
      @AI-HowTo  11 months ago +1

      There you can use the LCM sampler, which gives a slightly better image than Euler a; in A1111 there are some LCM sampler implementations, but they are still not part of the official release of A1111.

  • @breathandrelax4367
    @breathandrelax4367 9 months ago

    By the way, on my end it kept iterating on the same picture for the whole set of frames that was in the Resolve output... any idea where that comes from?

    • @AI-HowTo
      @AI-HowTo  9 months ago

      Not sure; double-check that you are using the batch folder properly.

    • @breathandrelax4367
      @breathandrelax4367 9 months ago

      @@AI-HowTo Thanks for your answer.
      Well, I did check; I separated the input folder and the output folder. I'll give it a new shot with fewer frames, since it took a while to process. Compared to your workflow I added ADetailer; do you think it could come from there?

  • @joelandresnavarro9841
    @joelandresnavarro9841 11 months ago +1

    Good video. I was just wondering what it would be like to make animations with the LCM LoRA. Do you know how an animation could be made with a specific face while preserving its hair, beard, eyebrows, lips, nose... would I have to make a LoRA (like you have in the other video with Elon), or could I do it with an image?

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Yes, possible. Currently the IP Adapter ControlNet allows you to morph a face; check ruclips.net/video/k4ZWJD6W8d0/видео.html where I explain an example with the IP Adapter. You just choose the model to be IP Adapter Face and put the face instead of the full body in the first ControlNet... or use face-swap technology such as ReActor, as in ruclips.net/video/gwId5NUNKDk/видео.html ... Making a LoRA for a person really takes time and lots of experimentation; still, the best results are achieved using a LoRA with After Detailer (but it can take days and lots of trials to achieve a perfect LoRA for a person).

    • @aivideos322
      @aivideos322 11 months ago

      Use ReActor face swap, formerly Roop.

  • @sigitpermana8644
    @sigitpermana8644 11 months ago

    I'm not good with logic and prompts, but can you explain this exact A1111 method on ComfyUI? Thank you.

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Will do so if I can in the future.

    • @sigitpermana8644
      @sigitpermana8644 11 months ago

      @@AI-HowTo Thank you so much!

  • @krupesh2
    @krupesh2 11 months ago

    I am trying to create LoRAs for characters and clothes separately. I have seen both of your videos on clothes and character LoRAs. Are there any sure-shot settings for creating character LoRAs that give the best accuracy in the resulting image? I need to automate the character LoRA process so that I just select 5-6 images of the person and the rest of the process can be automated.
    The same goes for training clothes LoRAs; can you suggest something to do so? Is it possible? I am training a LoRA to get the most realistic and accurate face, but some face-swap results are better than the generated images. Any suggestions?

    • @AI-HowTo
      @AI-HowTo  11 months ago

      IP Adapter ControlNet, which allows face swapping and style application; you might want to google that.
      Unfortunately, based on what I have seen, LoRA training doesn't always produce great results and requires testing different settings on some occasions, but when done right it yields better results than a face swap... I don't know of any tool for automating the process either. LoRA training in general might take time, because the same settings may not work for different datasets; even the results produced by one checkpoint might be better than another's, so lots of testing is required to produce something really good with a LoRA.

    • @krupesh2
      @krupesh2 11 months ago

      Using the IP Adapter ControlNet with inpainting, right? But that is a manual process to mask out the face and clothes. I think I will need to find a face edge-detection model and run the image through it, and then the masked image can go through image-to-image; that's how I can automate the process (see the sketch below). Let me know if you have any approach.
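
      A rough sketch of that automation idea, using OpenCV's stock Haar cascade as the face-detection model to build an inpainting mask; the cascade choice, padding, and file names are assumptions:

          import cv2
          import numpy as np

          img = cv2.imread("person.png")
          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

          # Stock frontal-face detector that ships with OpenCV.
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
          )
          faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

          # Paint each detected face white on a black mask for inpainting.
          mask = np.zeros(img.shape[:2], dtype=np.uint8)
          for (x, y, w, h) in faces:
              pad = int(0.15 * w)  # margin so the mask reaches hair and jawline
              cv2.rectangle(mask, (x - pad, y - pad),
                            (x + w + pad, y + h + pad), 255, thickness=-1)

          cv2.imwrite("face_mask.png", mask)  # use as the img2img inpaint mask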

  • @breathandrelax4367
    @breathandrelax4367 9 months ago

    It's possible to have the LCM sampler in A1111 by adding some lines of code in two of A1111's files.

    • @AI-HowTo
      @AI-HowTo  9 months ago

      Yes, I did that; I saw a post somewhere and followed it a while back, but didn't find the results to be an improvement over Euler a.

  • @dragongaiden1992
    @dragongaiden1992 6 months ago

    Friend, could you do it with XL? It is very difficult to follow along if you use SD 1.5; basically everything works differently from your video, and I get many errors and deformed images.

    • @AI-HowTo
      @AI-HowTo  6 months ago

      True, XL is certainly better, but unfortunately I still don't use it on my 8GB video card.

  • @fortniteitemshop4k
    @fortniteitemshop4k 11 months ago

    Sir, please tell me how to create videos like bryguy.

  • @dlfang
    @dlfang 11 months ago

    What would happen if you used LCM to train a LoRA? 😏

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Not sure; I tested with other LoRA models and it works well... The LCM LoRA is trained using their own training script, so I guess if we train using their script we get a LoRA that can help generate images faster and generate a subject at the same time, I think; I have not tried it.

  • @gu9838
    @gu9838 11 months ago

    You can still tell it's AI. If they could get rid of the flicker and the changes, that would go so well, but it's progress for sure! In a year or two, yeah lol

    • @AI-HowTo
      @AI-HowTo  11 months ago

      True, even at the current pace of progress it will take a few years before flickering disappears, but I think future videos will be 3D-generated and animated for perfect consistency and zero flickering, because Stable Diffusion will always produce some flickering, even with more complicated animation methods using AnimateDiff and other tools in ComfyUI.

  • @musigx
    @musigx 11 months ago

    @AI-HowTo Hey, any chance people can contact you for a proper business discussion? :)

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Sorry, I cannot at this time.

    • @musigx
      @musigx 11 months ago

      @@AI-HowTo Thx for your answer!