Transforming Your Videos Couldn't Be Easier!

  • Published: 28 Sep 2024
  • In this video, we delve into the SD-CN-Animation extension for Stable Diffusion. Creating new videos or modifying existing ones has never been easier. With detailed prompt descriptions, ControlNet, and LoRA, you can produce beautiful animations. The RAFT method significantly reduces the flickering problem.
    Although the additional settings in vid2vid remain a mystery, I will share any new information on my Discord server once I discover more. Keep an eye out!
    📣📣📣 I have just opened a Discord server to discuss SD and AI Art - common issues and news - join using the link: / discord
    🤙🏻 Follow me on Medium to get my Newsletter:
    - Get UNLIMITED access to all articles: / membership
    - Laura: / lauracarnevali
    - Intelligent Art: / intelligent
    📰 Medium Article:
    / sd-cn-animation-extension
    📌 Links:
    - GitHub SD-CN-Animation: github.com/vol...
    - RAFT paper: arxiv.org/pdf/...
    00:51 Discord Page - JOIN US ;)
    02:15 Install the SD-CN-Animation extension
    04:25 Text-to-video animation (txt2vid tab)
    07:31 Processing strength (Step 1) and Fix frame strength (Step 2)
    08:52 Where to find the outputs
    09:16 Video-to-video (vid2vid tab)
    15:07 Conclusions

Comments • 70

  • @AIPlayground_
    @AIPlayground_ 1 year ago +12

    Good video :) I started using SD-CN-Animation a few days ago and it's great; I use similar settings. I have three tips that increase the stability of the output video:
    1. Take the image that you generate in img2img and, in SD-CN, add another ControlNet unit with "reference_only", putting that image in there (so you have two ControlNet units: one with tile and another with reference_only using the image from img2img). The downside of this is that processing time increases a lot :(, but you will get more coherence and a better stylized video.
    2. Sadly, if you output a video at low resolution (below 512x512) the flickering is worse, so if you want amazing results it's better to increase the output resolution.
    3. If your input video has a lot of rapid movement (like a dance video) you will see a lot of ghosting in the output; you can decrease that effect with these settings (found after a lot of trial and error):
    "occlusion_mask_flow_multiplier": 3,
    "occlusion_mask_difo_multiplier": 4,
    "occlusion_mask_difs_multiplier": 3,
    "step_1_processing_mode": 0,
    "step_1_blend_alpha": 0,
    If your video doesn't have the ghosting effect, don't apply these settings, because they will increase the flickering.

    • @LaCarnevali
      @LaCarnevali  1 year ago +2

      Thank you! pinned! :)

    • @bradballew3037
      @bradballew3037 1 year ago

      Thanks for this. I'm trying to use this to make some 3D rendered characters look a bit more realistic. It works pretty well, but I'm still trying to get the most consistent temporal coherence. Any new tips since you posted this? I'm really digging in and running tests to try and see what all the settings do exactly.
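
    A minimal Python sketch of the anti-ghosting overrides from the pinned tip above; the key names come from the comment, while the settings-file name and the idea of patching a saved JSON are assumptions (in practice these values are set in the extension's vid2vid UI):

    # Minimal sketch, assuming the overrides are kept in a JSON settings file
    # (hypothetical name); in practice you set these values in the vid2vid UI.
    import json

    # Anti-ghosting overrides from the tip above.
    ANTI_GHOSTING = {
        "occlusion_mask_flow_multiplier": 3,
        "occlusion_mask_difo_multiplier": 4,
        "occlusion_mask_difs_multiplier": 3,
        "step_1_processing_mode": 0,
        "step_1_blend_alpha": 0,
    }

    with open("vid2vid_settings.json") as f:  # hypothetical settings file
        settings = json.load(f)

    # Only apply these if you actually see ghosting; otherwise they increase flickering.
    settings.update(ANTI_GHOSTING)

    with open("vid2vid_settings.json", "w") as f:
        json.dump(settings, f, indent=2)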

  • @gameboardgames
    @gameboardgames 11 months ago

    Thanks so much for this video. It's hard to find info on using sd-cn-animation. Your video is super helpful!

  • @Sully365
    @Sully365 1 year ago

    You are the first person to explain ControlNet in a way that makes sense to me. I can't thank you enough, great job!

  • @SantoValentino
    @SantoValentino 1 year ago

    Thanks Laura 🫡

  • @CarlosGarcia-tk5du
    @CarlosGarcia-tk5du 1 year ago

    You’re so awesome!! Great teacher. I’m going to join your discord later when I get on my pc. I’ve learned more in 20 mins of watching your videos than most. You explain everything so well.

  • @colinfoo2856
    @colinfoo2856 1 year ago +1

    Thank you so Much, Laura for this tutorials. Much appreciated ❤🎉

  • @memoryhero
    @memoryhero 1 year ago

    What a great tutorial. Excellent presentation visually, aurally, and organizationally!

  • @jrbirdmanpodcast
    @jrbirdmanpodcast 1 year ago

    This was very helpful Laura. Thank you very much.

  • @Ilovebrushingmyhorse
    @Ilovebrushingmyhorse 1 year ago

    Haven't watched the video, but I saw the thumbnail, and "video stable diffusion" sounds like something that would absolutely destroy my PC.

  • @TCISBEATHUB
    @TCISBEATHUB 1 year ago

    🏆🏆🏆 Love watching your videos. Thank you for the time you take to make them.

  • @alphonsedes8021
    @alphonsedes8021 1 year ago

    Impressive, a good tool if you're after really consistent animations, but a very long process indeed. Nice video, thanks!

  • @NikhilJaiswal4129
    @NikhilJaiswal4129 1 year ago

    Warp Fusion tutorial on Mac or RunPod?
    Is there an option to use Warp Fusion for free?

  • @aljopot4236
    @aljopot4236 1 year ago

    Thanks for this tutorial, I think this is the easiest method.

  • @benmcc7729
    @benmcc7729 1 year ago

    Hi Laura, I'm new to this, but I don't have ControlNet in my version (was this removed?)

    • @LaCarnevali
      @LaCarnevali  1 year ago

      I don't think it has been removed. Do you have ControlNet installed and activated in the Extensions tab?

  • @electricdreamer
    @electricdreamer 1 year ago

    Can you do this with Invoke AI's webui? Or does it have to be Automatic1111?

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Only Automatic1111 is fully supported

  • @sidbhattnoida
    @sidbhattnoida 1 year ago

    Hi... does this work on a Mac?

  • @alphacentauri424
    @alphacentauri424 1 year ago

    omg this girl is cute :) and not just cute, but that good kind of cute :)

  • @Comic_Book_Creator
    @Comic_Book_Creator 1 year ago

    It has a problem with Wildcard Manager...

  • @timbacodes8021
    @timbacodes8021 1 year ago

    Can this method be used to make videos like this? They ran a music video through it: ruclips.net/video/O7-SCsgMgnk/видео.html

    • @LaCarnevali
      @LaCarnevali  1 year ago

      It will probably take too much time, but you could use EbSynth, I suppose.

    • @ATLJB86
      @ATLJB86 1 year ago

      What you want for this is WarpFusion; nothing else is remotely close.

  • @tioilmoai
    @tioilmoai 1 year ago +3

    Congrats Laura! I'm Brazilian and you are Italian, speaking English in your tutorials, which helps me understand your content better since we have similar native languages! Good job! I hope your channel grows a lot! Keep giving us SD content! Thanks a lot! My name is Tio Ilmo!

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hi Tio Ilmo! Happy to hear that :)

  • @bonym371
    @bonym371 1 year ago +2

    Laura, your videos are perfect. You're so good at explaining, please keep producing content. I subscribed in 30 seconds flat. Plus your Italian accent, I could listen to it all day long!!

  • @EllenVaman
    @EllenVaman 1 year ago

    Thanks lovely ;)

  • @gordmills1983
    @gordmills1983 10 months ago

    Have to say… what a nice young lady! Subscribed.

  • @HiggsBosonandtheStrangeCharm
    @HiggsBosonandtheStrangeCharm 2 months ago

    Hi Laura, love your videos. I was just trying to follow your tutorial, but I don't seem to be able to find the SD-CN-Animation tab. I'm loading from the same "Extension index URL", but it mustn't exist any more? If you know a workaround, please let me know. Thanks heaps...

  • @0oORealOo0
    @0oORealOo0 1 year ago

    The result, IMO, is... just awful?

  • @creativeleodaily
    @creativeleodaily 1 year ago

    Amazing video, I will experiment with this soon. I used img2img to convert a batch from a 15-second 30fps video, and it turned out quite good on the first attempt.
    I'm curious: what GPU are you using?

  • @RiotRemixProductions
    @RiotRemixProductions 1 year ago

    ✍👍

  • @corujafilmmaker3724
    @corujafilmmaker3724 1 year ago

    🎉🎉🎉🎉🎉

  • @lucianodaluz5414
    @lucianodaluz5414 1 year ago

    If there were a way to make it stop "imagining" the image for each frame, that would solve this. Is there one? Like, "use the prompt just for the first frame and do your job". :)

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Maybe in an upgrade, but I'm not sure.

  • @m_sha3er
    @m_sha3er 1 year ago

    It takes too much time with multiple ControlNets; I'm testing a 2-second video and it gives me an estimate of about 6 hr 45 min 😅😅

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Yeah, it took me 4 hours for an 11-second video! Probably something that needs improvement.

  • @Comic_Book_Creator
    @Comic_Book_Creator 1 year ago

    I just tried, and I don't see the tab.

  • @JadhuGhr-lz8en
    @JadhuGhr-lz8en 1 year ago

    How do I update SD to the latest version? 😊

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Run git pull when in the main stable-diffusion-webui folder :)
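
      For reference, a minimal sketch of scripting that same update from Python; the install path is an assumption:

      # Minimal sketch: update a local AUTOMATIC1111 webui checkout,
      # equivalent to running "git pull" inside the install folder.
      import os
      import subprocess

      webui_dir = os.path.expanduser("~/stable-diffusion-webui")  # hypothetical install path
      subprocess.run(["git", "pull"], cwd=webui_dir, check=True)  # fetch and merge the latest commits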

  • @NikhilJaiswal4129
    @NikhilJaiswal4129 1 year ago

    Please help me with the Thin-Plate-Spline-Motion-Model for SD.ipynb

    • @LaCarnevali
      @LaCarnevali  1 year ago

      What about that?

    • @NikhilJaiswal4129
      @NikhilJaiswal4129 1 year ago

      In step 3:

      AttributeError                            Traceback (most recent call last)
      in ()
            8 if predict_mode=='relative' and find_best_frame:
            9     from demo import find_best_frame as _find
      ---> 10     i = _find(source_image, driving_video, device.type=='cpu')
           11     print ("Best frame: " + str(i))
           12     driving_forward = driving_video[i:]

      1 frames
      /usr/lib/python3.10/enum.py in __getattr__(cls, name)
          435         return cls._member_map_[name]
          436     except KeyError:
      --> 437         raise AttributeError(name) from None
          438
          439 def __getitem__(cls, name):

      AttributeError: _2D

    • @NikhilJaiswal4129
      @NikhilJaiswal4129 1 year ago

      In fact, I uploaded a PNG and MP4 of the same size.
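
      The AttributeError: _2D above is usually not about file sizes; it typically appears when a newer face_alignment release has renamed the LandmarksType._2D enum member. A hedged workaround, assuming the notebook installs the library at runtime (the pinned version is an assumption), is to run this in a cell before step 3:

      # Hedged workaround: pin face_alignment to a release that still defines
      # LandmarksType._2D; the exact version number is an assumption.
      import subprocess
      import sys

      subprocess.run(
          [sys.executable, "-m", "pip", "install", "face_alignment==1.3.5"],
          check=True,
      )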

  • @SouthbayCreations
    @SouthbayCreations 1 year ago

    Great video Laura, thank you! I joined your Discord also!! 🥳🥳

  • @CrazyBullProduction
    @CrazyBullProduction 1 year ago

    Thank you so much for the tutorial!
    Unfortunately, I get an error message after trying to generate the first frame that says "Torch not compiled with CUDA enabled".
    Do you have some magic information to help? 😀

    • @LaCarnevali
      @LaCarnevali  1 year ago +1

      Hi, that is not an issue if you are using a Mac. Do you see any other errors?

    • @CrazyBullProduction
      @CrazyBullProduction 1 year ago +1

      @@LaCarnevali I am using a Mac, but after generating the first frame I get this message in SD: "An exception occurred while trying to process the frame: Torch not compiled with CUDA enabled", and no other error messages in warp.

    • @LaCarnevali
      @LaCarnevali  1 year ago

      @@CrazyBullProduction try launching webui.sh with the --no-half flag, which keeps the model in full precision:
      ./webui.sh --no-half

  • @CognitiveEvolution
    @CognitiveEvolution 1 year ago

    This is one of the clearest explanations I've experienced on Stable Diffusion.

  • @ronnykhalil
    @ronnykhalil 1 year ago

    Love your videos. Thank you

  • @electrolab2624
    @electrolab2624 1 year ago

    Thank you! - I tried the mov2mov extension for Automatic1111 and like it a lot! - Wondering why not many people use it.

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Because not many people are aware of its existence, which is understandable given the number of extensions for A1111.

  • @KDashHoward
    @KDashHoward 1 year ago

    Thank you, it's really great and well explained! I was wondering... what's the main difference between this plugin and the Temporal Kit one? :O

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hello!! :) What are you referring to when you say 'Temporal Kit plugin'?

  • @zglows
    @zglows 1 year ago

    Hi Laura! Your videos are awesome. What do you recommend for getting the best animation results, this method that you explain right here or the one from your previous video?

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hi, this is a very good one, but it takes a bit of time.

  • @twilightfilms9436
    @twilightfilms9436 1 year ago

    I hope you can see that the original video has her eyes closed and the output does not. Also, there's an annoying flickering in the eyes. The reason for this is that the models are not properly trained for img2img: the models are trained with faces always looking at the viewer. When you train a model for other platforms, like METAHUMANS or others, you have to do it with the eyeballs pointing in all directions and with dilation. I've been trying to explain this to several YouTubers so they can put the word out, but nobody seems to understand the issue or, even worse, they don't care. So the problem will persist, with flickering in the eyes and hair, until the models are properly trained. This is of course from the eye of a professional. For TikTok videos I guess it's alright?

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hi, happy to discuss this further. For the video, I haven't trained a model but just picked a random one. I think with EbSynth there is less of this issue - anyway, I will try to train a model looking in all directions and will test it. Happy to hear different points of view, especially constructive ones (like in this case).