Corridor Crew Workflow For Consistent Stable Diffusion Videos

  • Published: 21 Sep 2024

Comments • 125

  • @dreamzdziner8484
    @dreamzdziner8484 1 year ago +20

    I am happy to see someone who is still exploring all the possibilities on getting the perfect consistent animation. Thank you for explaining everything clearly. Hopefully we will soon have an extension for getting consistent animations on SD.

  • @FortniteJama
    @FortniteJama 10 months ago

    Love the way you always cover the multiple variables and the frustration or patience they require to master. Almost requesting that video, just a vid of total frustration, cause I know it happens.

  • @msampson3d
    @msampson3d 1 year ago

    Always happy to find another person to subscribe to that is making high quality, easy to follow, technical videos on Stable Diffusion!

  • @elimaravich722
    @elimaravich722 1 year ago +4

    A great tip when captioning for a LoRA: add "Dudley" to the "Prefix to add to BLIP caption" field and it will be applied to every text file, so you don't have to go in and add it to each one.
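
    For reference, a minimal sketch of the same idea done in batch outside the GUI, assuming kohya-style one-caption-per-image .txt files; the trigger word and folder path are placeholders:

        # prepend_trigger.py -- prepend a trigger word to every BLIP caption file
        from pathlib import Path

        trigger = "Dudley, "             # trigger word to prepend (placeholder)
        caption_dir = Path("captions")   # folder of BLIP .txt captions (placeholder)

        for txt in caption_dir.glob("*.txt"):
            text = txt.read_text(encoding="utf-8")
            if not text.startswith(trigger):
                txt.write_text(trigger + text, encoding="utf-8")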

  • @yonderboygames
    @yonderboygames 1 year ago +2

    Thanks for creating this breakdown. I've been wanting to do a deep dive into a way to stylize some of my 3D animations using stable diffusion and didn't know where to even start. I'm subbing!

  • @drinkinslim
    @drinkinslim 1 year ago

    I'm amazed at the number of people, such as enigmatic_e, saying "anyways" instead of anyway. I don't know if I'll ever get used to it. (Random comment of the day.)

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Lol, habit I guess. I've never been a good speaker or writer. My years spent going back and forth between Mexico and the US probably didn't help.

  • @THAELITEVR
    @THAELITEVR 1 year ago

    love your analytical approach to getting shit done, great content

  • @clenzen9930
    @clenzen9930 1 year ago +2

    Great video, no complaints. For some people, extra steps might help: use SD to make the 3D renders look less like 3D renders. Different outfits would increase LoRA flexibility IF that was important. So many variables. Again, thanks for sharing all of this.

  • @lucianamueck
    @lucianamueck 1 year ago

    I love, love, love your channel. Congratulations on your work!

  • @JuliousNiloy
    @JuliousNiloy 1 year ago

    Man! This one was packed with information

  • @nibblesd.biscuits4270
    @nibblesd.biscuits4270 1 year ago

    Great tip on the fusion render speeding up the overall render. I’m new to resolve and it really did make a world of difference.

  • @IntiArtDesigns
    @IntiArtDesigns 1 year ago +1

    This is such a wicked tutorial! Thanks bro!

  • @Corruptinator
    @Corruptinator 11 months ago +1

    I think what could work is drawing pupil-less/iris-less eyes, as in all-"white" eyes, so that in post you can animate the pupil/iris in the eye socket for more consistency.

  • @COAgallery
    @COAgallery 1 year ago

    Bro. You rock. What a great video. Thank you for taking your time to create this, your work is clean. Subscribed!

  • @firasfadhl
    @firasfadhl 1 year ago +3

    The Flicker Free After Effects plugin is giving me good results. I use the Slow Motion 2 preset and activate the motion compensation. I don't like the idea of installing a whole program just for a deflickering effect. If anyone does a comparison between the two to see if there's a big difference, then maybe I'll consider it if it's really better 🤣.

  • @LoneRanger.801
    @LoneRanger.801 1 year ago

    Excellent. Thanks for all the description and details.

  • @UndoubtablySo
    @UndoubtablySo 1 year ago

    great guide, the possibilities are really exciting

  • @Daxviews
    @Daxviews 1 year ago

    Hey, just wanted to say thank you for this helpful guide! I followed it step by step (I had some problems with ControlNet, it only gave me one tab instead of multiple tabs) and so far my outcome looks pretty decent! Even without the deflicker effect in DaVinci Resolve Studio!

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Hey! To get multiple tabs, go to Settings > ControlNet; there should be an option to add multiple ControlNets. Change the amount, then hit Apply and restart the UI. You should have more tabs after that (see the sketch after this thread).

    • @Daxviews
      @Daxviews 1 year ago

      @@enigmatic_e Wow thanks a lot! Just found it and changed it. I hope you will continue making such great videos :D
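
      For reference, a minimal sketch of making the same change outside the UI, assuming the AUTOMATIC1111 webui with the sd-webui-controlnet extension; the config path and the key name "control_net_max_models_num" are my assumptions of what the Settings page writes:

          # controlnet_tabs.py -- raise the number of ControlNet units via config.json
          import json
          from pathlib import Path

          cfg_path = Path("stable-diffusion-webui/config.json")  # placeholder path
          cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
          cfg["control_net_max_models_num"] = 3   # number of ControlNet tabs to show
          cfg_path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")
          # restart the webui afterwards, as in the reply above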

  • @jasoncow2307
    @jasoncow2307 1 year ago

    The 3D-tracked background is awesome, I wish to see a lesson on it.

  • @FunwithBlender
    @FunwithBlender 1 year ago

    Great vid, you added a lot on top of the Corridor video. Well done!

  • @iamYork_
    @iamYork_ 1 year ago +1

    I'm working on my top AI channels video to pass on to my subscribers, as I have retired from the field for now... and you have definitely made the list... Just skimmed this video but you definitely go over all sides of it... from Blender to Mixamo to, I don't even know, some of the sites you're using... you are going deep on it... Great job my friend... Keep up the good work... I definitely recommend your channel to anyone who wants to get into generative AI work...

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Thank you good sir. 🙏

    • @judgeworks3687
      @judgeworks3687 1 year ago +1

      I'm one of your shared subscribers (yours & enigmatic's). Have learned sooooo much from both of you, thank you.

    • @iamYork_
      @iamYork_ 1 year ago +1

      @@judgeworks3687 Thank you... I still have a lot of knowledge to pass on but am currently held back by too many professional time constraints to upload weekly, especially on the tutorial side of it all, as a typical tutorial for me can take between 20 and 40 hours to create... Enigmatic has the crown right now in my opinion... for both beginners and more experienced users that dabble in other software... He blends them all together... Great person and talented as well... When anyone asks me about other channels to check out when it comes to generative AI for creative purposes... Enigmatic is the first channel I always recommend...

  • @Finofinissimo
    @Finofinissimo 1 year ago

    Amazing flow, man. Pretty neat tricks.

  • @cafefresh123
    @cafefresh123 1 year ago

    I love your videos! And thanks for the work in helping us understand how to easily create with hella tools :) Cheers from San Francisco!

    • @enigmatic_e
      @enigmatic_e 1 year ago

      No problem! Bay Area, nice! I’m from San Jose but now living in Germany. Cheers

  • @Brespree23
    @Brespree23 1 year ago +1

    For the level of quality that Corridor Crew had in their edit, is it necessary to make the model the same way, or can I get the same quality following your workflow? 'Cause I'm hoping to get a very unique model for each person I put into it.

  • @plamen2110
    @plamen2110 1 year ago

    Omg bro! It’s you! 🤯🤩

  • @yobkulcha
    @yobkulcha 1 year ago +2

    DaVinci Resolve's Magic Mask feature allows you to easily separate objects from their backgrounds.

  • @831digital
    @831digital 1 year ago

    +1 for the 3d tracking tutorial

  • @klaustrussel
    @klaustrussel 1 year ago

    Absolutely great video!! I was thinking about using ebsynth but this method seems really fun!! Cheers

  • @annashpitz8415
    @annashpitz8415 1 year ago

    Thanks man! You're the best! I really need this for a school project I have coming up, and you've been a lifesaver!
    Did you ever end up making that video about 3D tracking? I need to add my 3D-designed objects to the video. Could you point me to some info on that, please? Or even better, a link to your video if you made it.. fingers crossed! Thank you!

  • @hhkl3bhhksm466
    @hhkl3bhhksm466 1 year ago +1

    Hey, just wanted to say your content is great and very informative! I was wondering if you knew how to fix a bad or disfigured-looking face at a relatively close distance? I always get weird-looking faces when using img2img with ControlNet with the canny preprocessor and control-canny model.

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Might be pushing the ControlNet too much.

  • @bigvince4672
    @bigvince4672 7 months ago

    Using Mocha Pro is a cool idea.

  • @legacylee
    @legacylee 1 year ago

    Runway ML has a handy-dandy AI background removal. I personally haven't tried it yet, but having used Roto Brush 2, I think AI just made roto-ing not a pain in the 4$$ lol

  • @trappist95
    @trappist95 1 year ago

    About the LoRA training: it doesn't matter whether the artwork of the character you're trying to train on is in different styles or not, it will all translate over.

  • @Trivia2023
    @Trivia2023 1 year ago

    Good job

  • @syno3608
    @syno3608 1 year ago

    Thank you Thank you Thank you Thank you Thank you Thank you Thank you

  • @themightyflog
    @themightyflog 1 year ago

    Yes, please talk about 3D tracking.

  • @adrienberthalon6013
    @adrienberthalon6013 1 year ago

    Awesome workflow, thanks so much for making this kind of video! I'm having some trouble with reverse stabilization. Everything works just fine until I press "Unstretch" on CC Power Pin. Then my footage (the face of my character) loses its scale and becomes too small; it also "cuts" its own frame while moving (it looks like a precomposed object that gets cropped because it goes out of frame)... Any idea what might be going wrong here? Thanks a lot! 🙏

  • @moon47usaco
    @moon47usaco 1 year ago

    Yes please. Vids on 3D tracking. Thanks man. +]

  • @VouskProd
    @VouskProd 9 months ago

    Great video man! 👏 Full of super useful info. 👍 Excellent tip about stabilizing the face. 👌 Thanks a million. 🙏🙏🙏
    Still, I'm facing a process issue: how do you get the images out of img2img or Deforum with an easily "keyable" background?
    Even when I put a sequence with a flat green background as input, Stable Diffusion (img2img or Deforum) draws elements on the background and applies a dull color, so I can't key it afterwards in AE. I tried many prompts but with no luck. At 1:44, we can see that your output image has a plain green background; what smart sorcery did you use to achieve this?

    • @enigmatic_e
      @enigmatic_e 9 months ago

      This video is quite outdated unfortunately. A lot of techniques are not necessary anymore with Animatediff. I have two videos about it on my channel.

    • @VouskProd
      @VouskProd 9 months ago

      Thanks, I will check that out. Comfy seems great, but it's a whole new world to install and learn (it looks too time-consuming for me right now during my current project 😅)

    • @enigmatic_e
      @enigmatic_e 9 months ago

      @@VouskProd Yea, totally get that. I wouldn't switch if you're in the middle of something.

    • @VouskProd
      @VouskProd 8 months ago

      @@enigmatic_e Yup, but still, the more I look at ComfyUI, the more it draws me in, even in the middle of a project. 😅
      Anyway, you've already saved my life with the face zoom trick, which worked perfectly in my case 🙏 And for my not-so-green background on SD output, well, Roto Brush 2 is my friend 🙂

  • @SageGoatKing
    @SageGoatKing 1 year ago

    3d Tracking video would be cool!

  • @Warzak77
    @Warzak77 1 year ago +1

    I had the same bug with DaVinci Studio: rendering took either 5 seconds or 2 hours. It was driving me mad. Thank you for the tips!

    • @enigmatic_e
      @enigmatic_e 1 year ago

      No problem, it was driving me crazy too!

  • @judgeworks3687
    @judgeworks3687 1 year ago

    This is so helpful, thanks. Two questions. One: could you pull stills from the cosplay video, alter the stills, and then use those for training? Two: is the training only for humans, or could I take charcoal drawings I've made and train the LoRA on the drawing style? No figures, just technique and 'look'.

  • @andreyzmey
    @andreyzmey 1 year ago

    Amazing! Is there any chance you can do a video about the same flow in DaVinci Resolve instead of After Effects?

  • @NarimanGafurov
    @NarimanGafurov 1 year ago

    Thank you bro!

  • @yassiraykhlf5981
    @yassiraykhlf5981 1 year ago

    very useful thanks

  • @funnyknowledge7251
    @funnyknowledge7251 1 year ago

    Great video, super helpful.
    I have a question: whenever I batch using ControlNet, it only produces one frame from the directory I set, despite there being 200 images.
    Any thoughts on how to fix this?

  • @danaetcg
    @danaetcg 1 year ago

    thank you!

  • @Statvar
    @Statvar 1 year ago

    Is there a way to save your Stable Diffusion settings, like the noise multiplier for img2img? Also, thanks for this in-depth tutorial :D
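
    On saving settings: a minimal sketch, assuming the AUTOMATIC1111 webui, which persists Settings-page values in config.json; the key name "initial_noise_multiplier" is my assumption for the img2img noise multiplier:

        # save_defaults.py -- persist a webui setting by editing config.json
        import json
        from pathlib import Path

        cfg_path = Path("stable-diffusion-webui/config.json")  # placeholder path
        cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
        cfg["initial_noise_multiplier"] = 1.0   # value the UI should start with
        cfg_path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")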

  • @SevenDirty
    @SevenDirty 1 year ago

    When I go to the GitHub page I can't seem to find the commands you copied into PowerShell when installing kohya (at 16:15). Has it changed, or am I confused?

  • @Ghost-wn9cf
    @Ghost-wn9cf 1 year ago

    Wouldn't running optical-flow tracking on the original video, then applying that as a transform backwards and forwards with blending on the generated video, smooth things out? I have no idea if something like that has been attempted, or how to actually implement it, but I have a feeling it would be nice :D
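
    A minimal sketch of that idea, assuming OpenCV's Farneback optical flow: estimate motion on the original frames, warp the previous generated frame onto the current one, and blend; the filenames and the blend weight are placeholders to tune:

        # flow_blend.py -- motion-compensated blending to damp flicker
        import cv2
        import numpy as np

        prev_src = cv2.imread("src_0001.png")  # original footage, frame t-1
        curr_src = cv2.imread("src_0002.png")  # original footage, frame t
        prev_gen = cv2.imread("gen_0001.png")  # generated frame t-1
        curr_gen = cv2.imread("gen_0002.png")  # generated frame t

        g0 = cv2.cvtColor(prev_src, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(curr_src, cv2.COLOR_BGR2GRAY)
        # flow from the current original frame back to the previous one, so that
        # sampling prev at (x + fx, y + fy) aligns it with the current frame
        flow = cv2.calcOpticalFlowFarneback(g1, g0, None, 0.5, 3, 15, 3, 5, 1.2, 0)

        h, w = g1.shape
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (xs + flow[..., 0]).astype(np.float32)
        map_y = (ys + flow[..., 1]).astype(np.float32)
        warped_prev = cv2.remap(prev_gen, map_x, map_y, cv2.INTER_LINEAR)

        # blend the motion-compensated previous result into the current one
        smoothed = cv2.addWeighted(curr_gen, 0.6, warped_prev, 0.4, 0)
        cv2.imwrite("gen_0002_smoothed.png", smoothed)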

  • @bigdaveproduction168
    @bigdaveproduction168 1 year ago

    Okay, and just to know: is it no longer possible to use the original method from their original tutorial?

  • @RaziqBrown
    @RaziqBrown 1 year ago

    please make the AE+Blender 3d tracking video

  • @EditArtDesign
    @EditArtDesign 1 year ago

    Where should these LoRA settings files be located, and how do I use them? It's not very clear. Thank you in advance!

  • @razvanvita6548
    @razvanvita6548 1 year ago

    For a better result you should have used the main prompt to describe what you want from MHA.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Thank you for the advice. This video however is quite outdated now. There’s different methods that give way more consistent results now.

  • @federicogonzalezgalicia3041
    @federicogonzalezgalicia3041 1 year ago

    Hi, I get this every time I try to generate an image from text:
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
    Any solutions?
    (I have an iMac with 64GB RAM, btw)
    Thank you very much
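
    That error usually means half-precision ops are running on the CPU, which PyTorch's CPU backend doesn't implement. A commonly suggested workaround, assuming the AUTOMATIC1111 webui, is forcing full precision via the launch flags; whether it fixes this exact Mac setup is not guaranteed:

        # webui-user.sh (macOS) -- force full precision so LayerNorm runs in fp32
        export COMMANDLINE_ARGS="--no-half --precision full"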

  • @joeighdotcom
    @joeighdotcom 1 year ago

    would love to see how you use blender :D

  • @Disorbs
    @Disorbs 1 year ago

    When I added the green-screen video and did the tracking in AE, and then exported it as JPEGs, mine still shows the green screen. How did you remove that so it comes out with a black background instead?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      You would have to use a color key to remove the green.
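
      For reference, a minimal chroma-key sketch of that step, assuming a reasonably uniform green and Pillow + NumPy installed; the filename and thresholds are placeholders to tune per shot:

          # green_key.py -- knock out near-green pixels into transparency
          import numpy as np
          from PIL import Image

          rgb = np.asarray(Image.open("frame_0001.png").convert("RGB")).astype(np.int16)
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

          # treat a pixel as "screen" when green clearly dominates red and blue
          screen = (g > 100) & (g > r + 40) & (g > b + 40)

          alpha = np.where(screen, 0, 255).astype(np.uint8)
          rgba = np.dstack([rgb.astype(np.uint8), alpha])
          Image.fromarray(rgba, "RGBA").save("frame_0001_keyed.png")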

  • @soyguikai
    @soyguikai 1 year ago

    Remember to redirect newcomers to the introductory videos you already have, for example the most recent one on how to install SD.

  • @bot2.078
    @bot2.078 1 year ago

    I have the model "sd15_hed.pth", but when using it I don't see the "hed.yaml" for the preprocessor. Any suggestions, anyone?

  • @Jagaan7972
    @Jagaan7972 1 year ago

    SD makes my background different, and because of this I can't remove the background in After Effects. What could it be?

  • @dinah6956
    @dinah6956 1 year ago

    Is there a way you can create a realistic image from your own background and turn it into a 3D image? I'm new to all of this.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Mmm I don’t think that’s possible at the moment.

  • @SoniCexe-xq1uy
    @SoniCexe-xq1uy 1 year ago +1

    Could you tell me your PC configuration?

  • @NewMateo
    @NewMateo 1 year ago

    Can you do an updated video on WarpFusion? The new version is much better and way smoother!

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      I know, I was hired to help with it 😁

    • @NewMateo
      @NewMateo 1 year ago

      @@enigmatic_e Ahh sorry! 😅 Well you did an incredible job! that warp fusion tech is crazy good!

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      @@NewMateo 😂 all good. Will probably do an updated tut soon

  • @FleischYT
    @FleischYT 1 year ago

    What would you change in your config when using an RTX 4090 (24GB VRAM) and a 16-core CPU?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Not sure what would change. Maybe you could make the resolution bigger in Stable Diffusion.

  • @klimpaparazzi
    @klimpaparazzi 1 year ago

    Nowhere in the description do you mention how to download the JSON files.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Under LoraBasicSettings.json there is a link to download it.

  • @pmlstk
    @pmlstk 1 year ago

    put a pastebin or something for the prompts man

  • @tanyasubaBg
    @tanyasubaBg 1 year ago

    Amazing stuff. Unfortunately, I don't have an Nvidia card, so I can't try anything that you share. Do you have any suggestions for people who use an AMD card? Thank you in advance.

    • @enigmatic_e
      @enigmatic_e 1 year ago +2

      You might have to go with Google Colab and use it through there. I want to get into that and try to make a video for people in your situation.

    • @tanyasubaBg
      @tanyasubaBg 1 year ago

      @@enigmatic_e thanks it would be great

    • @judgeworks3687
      @judgeworks3687 1 year ago

      This woman's tuts are great too; she covers using RunPod and how to run SD when you have an older computer (she doesn't run SD on her own machine). I don't know if the LoRA training works, but it seems like it would… ruclips.net/video/--Z03wbDp_s/видео.html

    • @BrunodeSouzaLino
      @BrunodeSouzaLino 1 year ago +1

      SD should work with AMD cards with ROCm support and PCIe 3 atomics. Don't expect much in the way of support, as most people think CUDA is the only framework that exists.
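
      A quick sanity check for that setup: PyTorch's ROCm builds expose supported AMD GPUs through the same torch.cuda API, so this snippet works on both vendors (assuming a ROCm build of PyTorch is installed):

          # gpu_check.py -- verify the GPU is visible to PyTorch (CUDA or ROCm)
          import torch

          print("GPU available:", torch.cuda.is_available())
          if torch.cuda.is_available():
              print("Device:", torch.cuda.get_device_name(0))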

  • @RHYTE
    @RHYTE 1 year ago

    why don't you use deforum for this?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Does it give different results?

    • @RHYTE
      @RHYTE 1 year ago

      @@enigmatic_e It should give more consistency because the last frame is fed in to generate the next. However, for me it doesn't seem to work as well with ControlNet at the moment.
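
      A minimal sketch of the feedback idea described above: blend each new source frame with the previous generated frame before img2img, so the last output steers the next one. generate_img2img() is a placeholder for whatever backend is used, and the blend weight is a guess to tune:

          # feedback_loop.py -- feed the previous output back into img2img
          from PIL import Image

          def generate_img2img(init_image, prompt):
              # placeholder: call your img2img backend here and return a PIL image
              raise NotImplementedError

          prev_out = None
          for i in range(1, 11):
              # assumes all frames (and outputs) share the same size and RGB mode
              frame = Image.open(f"src_{i:04d}.png").convert("RGB")
              init = frame if prev_out is None else Image.blend(frame, prev_out, 0.35)
              prev_out = generate_img2img(init, "your prompt here")
              prev_out.save(f"out_{i:04d}.png")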

  • @Dreamy_Downtempo
    @Dreamy_Downtempo 1 year ago +1

    I can't get the LoRA to work; the installation guide on GitHub is completely different now.

  • @AI数字人
    @AI数字人 1 year ago

    Is there any real-time software that can implement AI technology like this?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Not at the moment. Runway is getting close

    • @AI数字人
      @AI数字人 1 year ago

      @@enigmatic_e Thank you very much. Looking forward to a real-time tool; I think using it live, when the time comes, should be very interesting.

  • @musyc1009
    @musyc1009 1 year ago

    How did you get multiple tabs for ControlNet?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Go to Settings, then ControlNet; I think there's a setting there to add more ControlNets.

    • @musyc1009
      @musyc1009 1 year ago

      @@enigmatic_e Got it! Thanks for the instructions, and keep up the good work with the vids. You helped A LOT.

  • @BKLYNXGAMING
    @BKLYNXGAMING 1 year ago

    Can this work for Mac?

  • @vigamortezadventures7972
    @vigamortezadventures7972 1 year ago

    I subscribed to Corridor Crew and it didn't go into as much depth as what is said here.. not to discourage anyone, but you may not find the answers you seek in the subscription..

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Do you mean you did the paid subscription with them?

    • @bigdaveproduction168
      @bigdaveproduction168 1 year ago

      Yes, I know what you mean. Now, with the evolution of Stable Diffusion, Corridor's tutorial seems obsolete.

  • @BrunodeSouzaLino
    @BrunodeSouzaLino 1 year ago

    This whole workflow is beyond most budgets. I don't think most small studios or individuals have the know-how and funds to create their own AI algo specific to a curated dataset of expected results, then have enough computing power to train on said dataset to satisfaction in a timely manner, record video with the correct settings, and repeat a series of complicated conversion and cleanup steps on a frame-by-frame basis across several pieces of software until the whole process is done. It's important to note that the vast majority of artists are not technical people and know very little, if any, programming, even when said programming is related to their craft. Couple that with the fact that SD is in constant development and has non-existent documentation, and you have a workflow which would be slower than doing the whole thing yourself to the same level of quality (keeping in mind that most of the cleanup you have to do on the outputs would already be integrated into the result by the animator).

  • @zhexiang8952
    @zhexiang8952 1 year ago

    so complex😅😅

  • @aminebelahbib
    @aminebelahbib 1 year ago +1

    It looks bad

  • @Immortal_BP
    @Immortal_BP 1 year ago +1

    I can't help but feel bad for all the animators in Japan who make less than minimum wage. I think they will be replaced by AI in the next 10 years.