You Need This Hack To Get Consistent AI Video Using Stable Diffusion Controlnet and EBsynth | Part 1

  • Published: 27 May 2024
  • Video consistency in Stable Diffusion can be optimized when using ControlNet and EBsynth. In this tutorial, I'll share two awesome tricks Tokyojab taught me and introduce a free generator for creating a grid from your images.
    It's a crucial step to achieve flicker-free and consistent AI animation. Tokyojab uses the txt2img tab instead of the img2img tab in Stable Diffusion, which allows for incredible style transfer at high resolution.
    He calls it the temporal consistency method. It's the best flicker-free AI video technique I've come across in 2023, made possible with Stable Diffusion, ControlNet, and EBsynth. Get ready to take your videos to the next level with these powerful tools! (A short code sketch of the grid workflow follows the links below.)
    0:00 Introducing Tokyojab's 2 hacks
    0:38 Showcasing Tokyojab's videos
    2:30 Start Tutorial
    3:36 Export video into img sequence
    5:25 Making the grid from 4 images on the Sprite Sheet Packer website
    6:17 Settings in Stable Diffusion and ControlNet
    7:07 The 3 Civitai models Tokyojab uses + the installation
    7:48 The VAE (variational autoencoder) + the installation
    8:39 Prompting in Stable Diffusion
    10:17 Cutting the grid into 4 images with ezgif sprite sheet cutter
    11:01 Create the images in EBsynth
    11:57 Stitching the images together in DaVinci Resolve 18
    13:03 What is in the 2nd Tutorial
    Links:
    -Tokyojab instagram: / tokyojab
    -Tokyojab Reddit post:
    / tips_for_temporal_stab...
    The 3 Civitai models:
    -Art & Eros
    civitai.com/models/3950/art-a...
    -Realistic-vision-v12
    civitai.com/models/4201/reali...
    -Cine Diffusion
    civitai.com/models/50000/cine...
    -Pexels girl (source video)
    www.pexels.com/video/a-person...
    Installing the VAE model from the Hugging Face website:
    -huggingface.co/stabilityai/sd...
    Sebastian Kamph's installation guide for Stable Diffusion AUTOMATIC1111 WebUI:
    - • How to Install Stable ...
    Creating the grid and cutting it:
    -www.codeandweb.com/free-sprit...
    -ezgif.com/sprite-cutter
    Rundiffusion
    -rundiffusion.com/
    EBsynth Free Software
    -ebsynth.com/
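    For anyone who prefers to script the grid steps (3:36, 5:25, and 10:17 in the chapters above) instead of using the websites, here is a minimal Python sketch using ffmpeg and Pillow. The file names, the 512x512 tile size, and the four keyframe numbers are assumptions based on the video's example; adjust them to your own footage.

    import subprocess
    from pathlib import Path
    from PIL import Image

    TILE = 512  # per-frame resolution used in the video

    # 1) Export the video into an image sequence (3:36). Assumes the
    #    source is already square; crop it first if it isn't.
    Path("frames").mkdir(exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4", "-vf", f"scale={TILE}:{TILE}",
         "frames/frame%04d.png"],
        check=True,
    )

    # 2) Pack 4 chosen keyframes into one 2x2 grid (5:25).
    keyframes = [1, 25, 50, 75]  # placeholder frame numbers
    grid = Image.new("RGB", (2 * TILE, 2 * TILE))
    for i, n in enumerate(keyframes):
        tile = Image.open(f"frames/frame{n:04d}.png")
        grid.paste(tile, ((i % 2) * TILE, (i // 2) * TILE))
    grid.save("grid.png")

    # ...run grid.png through txt2img + ControlNet in Stable Diffusion...

    # 3) Cut the stylized 1024x1024 result back into 4 tiles (10:17).
    out = Image.open("grid_stylized.png")
    Path("keys").mkdir(exist_ok=True)
    for i, n in enumerate(keyframes):
        x, y = (i % 2) * TILE, (i // 2) * TILE
        out.crop((x, y, x + TILE, y + TILE)).save(f"keys/key{n:04d}.png")

    EBsynth expects the keyframe file names to line up with the frame numbers of the original sequence, which is why the tiles are saved under the keyframe numbers rather than 0, 1, 2, 3.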
    DISCLAIMER: No copyright is claimed in this video and to the extent that material may appear to be infringed, I assert that such alleged infringement is permissible under fair use principles. If you believe material has been used in an unauthorized manner, please contact the poster.
  • Hobby

Comments • 467

  • @Amazombe
    @Amazombe 7 months ago +5

    It's CRAZY stuff to stand out. Like and Subscribe. Thanks to you and the inventor!!!

    • @digital_magic
      @digital_magic 7 months ago

      I am glad you liked the video 🙂

  • @latent-broadcasting
    @latent-broadcasting 10 months ago +1

    Amazing tutorial! Thanks so much!

  • @YAHURDME
    @YAHURDME 10 months ago +1

    Bless you and him for sharing this knowledge!

    • @digital_magic
      @digital_magic 10 months ago

      Yes, Tokyojab is a legend :-)
      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

  • @infiniteshowrooms
    @infiniteshowrooms 11 months ago +18

    ...yeah, we're going to need that part 2, asap.
    Incredible!🎉

    • @digital_magic
      @digital_magic 11 months ago +8

      I am glad you like it too 🙂 I will start creating part 2 tomorrow.

    • @infiniteshowrooms
      @infiniteshowrooms 11 months ago +2

      @@digital_magic Awesome. Since you're still in talks with Tokyojab, please suggest he start a Patreon or something for his continued learning. I'm sure he'll keep pushing AI video capabilities (especially with GAN video models coming soon) and I'd love to pay a subscription for all his newest test learnings!

    • @digital_magic
      @digital_magic 11 months ago +2

      @@infiniteshowrooms Yeah, he is so helpful; he is a truly great guy 🙂 I will let him know your message and suggest it to him.

    • @Teebyteebs
      @Teebyteebs 10 months ago +1

      @@digital_magic yes, we need this! Thank you!!!!!

    • @ariftagunawan
      @ariftagunawan 10 months ago +1

      I'm waiting for it too Master

  • @PHATTrocadopelus
    @PHATTrocadopelus 11 months ago +7

    This is amazing!! Looking forward to part 2!!

    • @digital_magic
      @digital_magic 11 months ago +2

      Yeah, Tokyojab's method is amazing; I am glad you like it too 🙂 I will start creating part 2 next week.

    • @shailendrarathore445
      @shailendrarathore445 11 months ago +1

      @@digital_magic Yes, we're just waiting for the update. Thanks, by the way, for the good explanation, but it needs more detail to clear up the myths and concepts around img2img, since you used txt2img.

  • @giantbee9763
    @giantbee9763 10 months ago +1

    Thank you + Tokyojab for sharing!!!

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take about 10 days, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

  • @ElHongoVerde
    @ElHongoVerde 10 months ago +3

    5:55 YAY! Seb is the best.
    Amazing tutorial, mate. You got me.

  • @oriisking
    @oriisking 10 months ago +2

    Amazing. I'll be waiting for the second one ❤

    • @digital_magic
      @digital_magic 10 months ago

      I am glad you liked it :-) I will start working on the 2nd tutorial today. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @oriisking
      @oriisking 10 months ago

      @@digital_magic wish you all the best buddy ❤️ get some good rest

  • @GerardoValerio
    @GerardoValerio 11 months ago +4

    ❤ whoa! Tokyojab is a genius! And you sir are amazing for breaking this down so easily to make! Thank you so much for sharing this knowledge 🙏

    • @digital_magic
      @digital_magic 11 months ago +1

      You're so welcome! I am glad you liked it :-) I will start working on the 2nd tutorial tomorrow. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment. And thanx for your great comment :-)

    • @digital_magic
      @digital_magic 11 months ago +1

      And yes, Tokyojab is a true genius, and he is helping me so much to make these tutorials. I owe him a lot 🙂

    • @GerardoValerio
      @GerardoValerio 10 months ago +1

      @@digital_magic I wish you well, amigo 🙏 I'm sorry to hear about your health situation and I hope you get better soon. The only thing I can suggest to fight this, since I have Crohn's, is to eat as many anti-inflammatory foods as possible and be careful with processed foods. I truly wish you well 🙏

    • @digital_magic
      @digital_magic 10 months ago +1

      @@GerardoValerio Thanx mate 🙂 I am also being careful with processed foods. Do you have any suggestions for anti-inflammatory foods?

    • @GerardoValerio
      @GerardoValerio 10 months ago +1

      @@digital_magic Yes, chayote and asparagus are a winning team. For soups I would cut some chayote with potatoes, meat (fish, chicken, or beef), spinach, and garlic. For hot plates I would do mashed sweet potatoes 🍠 and steamed asparagus with a little bit of spices for flavor. Adding turmeric to the dishes helps a lot too. I also do veggie protein shakes with different fruits like shaved apples, pineapple, and banana with raspberries, ginger, and blueberries. Milk-wise I use coconut milk, as cow's milk can actually hurt the gut. So most often I use coconut milk for my protein milkshakes too.

  • @Failed_Society_Real
    @Failed_Society_Real 10 months ago +1

    Your production is fantastic.
    I'm learning a lot from you 😌

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it 🙂 Working on the last bits today :-) and around 17:00 European time the 2nd tutorial will go online :-)

    • @BadBanana
      @BadBanana 10 months ago

      ​@digital_magic very eager now sir 🙏🙏

  • @davewaldmancreative
    @davewaldmancreative 6 months ago +1

    Thank you so much. Dave

  • @Fahnder99
    @Fahnder99 10 months ago +1

    Thank you!
    It's also getting more obvious where the limits of AI imaging will be.

    • @digital_magic
      @digital_magic 10 months ago +1

      Yes, you are right about that. But it is still amazing to play around with.

  • @bunnystrasse
    @bunnystrasse 10 months ago +1

    This is awesome!

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

  • @user-nv3gh6ll7e
    @user-nv3gh6ll7e 10 months ago +1

    this is totally awesome, thank you

  • @sukhpalsukh3511
    @sukhpalsukh3511 11 months ago +2

    Great, waiting for part 2

    • @digital_magic
      @digital_magic 11 months ago +1

      I am glad you liked it. I will start working on the 2nd tutorial tomorrow. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @sukhpalsukh3511
      @sukhpalsukh3511 11 months ago

      @@digital_magic Hope you get well soon; health first. Take as much time as you want 😊

  • @PriNovaFX
    @PriNovaFX 11 months ago +18

    Use the AUTOMATIC1111 Stable Diffusion roop extension for nearly perfect face swaps. This also works for grid images. All you have to do is enter "0, 1, 2, 3" into the face number option to face-swap all four faces. This works great.
    Thank you for this video

    • @digital_magic
      @digital_magic 11 months ago +6

      Thanks for your great comment. I was thinking about doing something like this and making a tutorial about it, but I didn't know about the "0, 1, 2, 3" option. Thanks for the tip; hopefully I find time to make a tutorial about it.

    • @eliisrael4112
      @eliisrael4112 10 months ago +1

      Sounds very interesting, do you have more info on this workflow?

    • @chrisbraeuer9476
      @chrisbraeuer9476 10 months ago

      I also thought about using Roop for the faces. But somehow they always look like perfect masks that sit above the real face. You know what I mean? They stick out too much most of the time.

    • @littlered6340
      @littlered6340 10 months ago

      @@chrisbraeuer9476 That's interesting, does that also happen if you use the 'swap in source image' instead of 'swap in generated image' option? (I don't actually know what they do since I've never tinkered with it. Guess I'll do that now!)

    • @chrisbraeuer9476
      @chrisbraeuer9476 10 months ago +1

      @littlered6340 You can get really good results in Roop, but only for the face. For turns, and if things cover the face even for a short time, it becomes visible.
      Atm I have several routes, but none of them gives perfect results. There are always little pieces that don't work 100%.
      That EBsynth method gave me the clearest and best quality results so far.
      But I am sure that if one put enough effort into it (cutting backgrounds and changing stuff by hand) I could get it perfect. But that's not an option since it would take too long. Yesterday's video failed because my initial starting video was a bad choice. I tried to bulk-remove the backgrounds, but around 150 frames needed editing. Her trousers, the background, and a nearby table had nearly the same color, so the background removal got confused. But that's fixable.
      The character itself, the sharpness, and the fluid movement were there. Just the seams were visible. I need to play around with the EBsynth settings more. Yesterday was the first time I used it.
      I used txt2img with ControlNet.

  • @chrisbraeuer9476
    @chrisbraeuer9476 10 months ago +2

    Great channel!

    • @digital_magic
      @digital_magic 10 months ago

      Glad you enjoy it!

    • @chrisbraeuer9476
      @chrisbraeuer9476 10 months ago

      @@digital_magic In depth tutorials are always appreciated. There are just so many things... Thank you.

  • @ThomasLennon1
    @ThomasLennon1 11 months ago +1

    What a brilliant video. I've been following Tokyojab's showcases, but the written guide wasn't as intuitive as the video. Looking forward to part 2!

    • @digital_magic
      @digital_magic 11 months ago +2

      Thanx, that is so great to hear. Tokyojab is amazing, and he is helping me (through emails) almost every day to make the tutorials as good as possible. I will start working on the 2nd tutorial tomorrow. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

  • @kritikusi-666
    @kritikusi-666 10 months ago +9

    This is by far the best walkthrough guide I have come across. Good job my guy. Awesome content.

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) and am delighted by your comment :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @vox_seven
      @vox_seven 10 months ago +1

      This really changes everything, thanks

    • @digital_magic
      @digital_magic 10 months ago

      @@vox_seven I am glad you liked it 🙂

  • @DJHUNTERELDEBASTADOR
    @DJHUNTERELDEBASTADOR 11 months ago +2

    Thank you very much, my friend, for sharing this knowledge. New follower here; greetings from Bolivia!

    • @digital_magic
      @digital_magic 11 months ago

      I'm glad you liked the video, and happy to see a new subscriber. I will start working on the second tutorial tomorrow.

  • @resetmatrix
    @resetmatrix 11 months ago

    Great tutorial, we need the second part please!

    • @digital_magic
      @digital_magic 11 months ago +1

      I am glad you liked it :-) I will start working on the 2nd tutorial today. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

  • @831digital
    @831digital 11 months ago +1

    Great job! We need the second tutorial now.

    • @digital_magic
      @digital_magic 11 months ago

      Thanx, I am glad you liked it :-) Hahahaha... yeah, I wish that was ready already as well 🙂 I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment 😞 But I am going to start working on it on Monday...

    • @831digital
      @831digital 11 months ago +1

      @@digital_magic Sorry to hear. Hope you feel better soon. I had another quick question. Can you talk a little bit more about how you dealt with the blinks? Eye blinking is causing me a big headache in ebsynth right now. Thanks and best wishes for a speedy recovery!

    • @digital_magic
      @digital_magic 11 months ago +1

      @@831digital Thanx, my homeopathic doctor said that it will get better within 5 months 🙂 I guess I was a bit lucky with the eyes, because at frame 11 it went wrong in the 1st keyframe. And then in the 2nd keyframe I was lucky that frame 12 started with a good shot of the eye. But in general I think it matters how close your keyframes are to each other. I would suggest starting with a short sequence, a maximum of 100 frames, to learn the method. What you also could do is choose a new keyframe where the eye is open again. Something else that matters: if the head turns, EBsynth can lose track. Hope this helps 🙂

    • @ccl1195
      @ccl1195 10 months ago

      @@digital_magic Hmm I had a comment here that I think got auto-removed. Just inquiring if you had heard of Wim Hof method re. immune illnesses. Cheers and thanks for the great video.

  • @atdfilms360
    @atdfilms360 10 months ago +1

    Thank you for posting this breakdown of his workflow! Did you ever post part 2? I couldn't see it in your uploads. Thank you

    • @digital_magic
      @digital_magic 10 months ago

      Working on the last bits today :-) and around 17:00 European time it will go online :-)

    • @atdfilms360
      @atdfilms360 10 months ago

      @@digital_magic Oh amazing!!! Look forward to seeing it. I'm working on something that this would be super useful for. I'll set an alarm for then!!! Thank you!!!

  • @lincolnrenall
    @lincolnrenall 10 months ago +3

    Looking forward to video 2 also... Still experimenting with this method. I got some OK results with 1 frame, but it's a little tricky getting the 2x2 frames to diffuse without border changes. Also still experimenting with the number of keyframes to avoid blurry interpolation in between... but hey, it's an interesting approach. Looking forward to the next video.

    • @digital_magic
      @digital_magic 10 months ago

      Thanx for your kind comment. Working on the last bits today :-) and around 17:00 European time it will go online :-)

  • @RoockeeR
    @RoockeeR 10 months ago +1

    Thanx bro

  • @judymyers1934
    @judymyers1934 10 months ago +106

    These AI tools are getting so crazy, that if you don't use them to your own advantage, you will be left so far behind everyone else. To be fair, it's really good for content creators (and even better if they use it along with Famester). Create some content, and make it go popular straight away, I like that.

    • @digital_magic
      @digital_magic 10 months ago

      Thanx for your comment; what is Famester?

    • @waynemcckn
      @waynemcckn 10 months ago

      @@digital_magic I think this is a spam comment with fake upvotes.

  • @wonderwomen4065
    @wonderwomen4065 11 months ago +1

    Very helpful

    • @digital_magic
      @digital_magic 11 months ago +1

      Thanx, I am glad you liked it 🙂

  • @Pauluz_The_Web_Gnome
    @Pauluz_The_Web_Gnome 10 months ago +1

    Unbieliefubul man ferrie greejt fiediejoow! :D

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @Pauluz_The_Web_Gnome
      @Pauluz_The_Web_Gnome 10 months ago +1

      @@digital_magic Sorry 2 hear man...Get better soon...All the best!

  • @KaiK4Isa
    @KaiK4Isa 7 months ago +1

    I was dying to animate while on SeaArt; thank you for the video. OMG, it feels like that animation of the erosion bird.

    • @digital_magic
      @digital_magic 7 months ago

      Thanx, I am glad you liked it. Tokyojab is an amazing artist who always finds ways to get amazing results.

  • @elifmiami
    @elifmiami 1 month ago

    This is amazing work! I was just wondering about the background. Is it possible to change that with this method?

  • @nightmisterio
    @nightmisterio 10 months ago +1

    WOOOOW!!!!

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

  • @dctrex
    @dctrex 10 months ago

    Wow, great tutorial, this is very generous of you! Do you know if Tokyojab himself has a YouTube channel?

    • @digital_magic
      @digital_magic 10 months ago

      Yes he has: @THEJABTHEJAB. He didn't want me to mention it in the video because he just uses it as a dumping place for his videos. And I'm thankful for your great comment, but it's not very generous of me; it is very generous of him that he shared everything with people on Reddit. He has also been very helpful to me in creating this tutorial; I owe him a lot.

  • @digital_magic
    @digital_magic 11 months ago +23

    I want to APOLOGIZE for something. A friendly viewer of the above video reminded me of the following:
    -In English, there is a HUGE difference between referring to a non-white person as a "person of color" vs. a "colored person". "Person of color" (PoC) is the preferred terminology. "Colored person" is an extremely racist term popularized in pre-1970s America and is found extremely offensive in modern English-speaking culture.
    I wasn't aware of that, and I feel really bad that I used the term "colored person" :-( This wasn't intentional; I thought I was doing the right thing and wasn't discriminating against anybody. If anybody who has watched this video feels discriminated against by me, then I am very sorry for that. I am not a racist at all; I worked at a school for migrant children and I have good feelings toward all people all over the world.

    • @infiniteshowrooms
      @infiniteshowrooms 11 months ago +2

      It's okay, we understand. 💙

    • @digital_magic
      @digital_magic 11 months ago +1

      @@infiniteshowrooms I am glad to hear that 🙂

    • @asksterling
      @asksterling 11 months ago +8

      Don't worry, it's an easy fix and you already did it. This is why it is so important for us (yes, I'm a Black American) to stop allowing all these new terms for our ethnicities. We're all human and there really are no so-called "races". If I'm a Doberman and you're a German Shepherd, aren't we both dogs?
      Many people were indoctrinated into this line of "racial thinking", but there is no real biological science to back it up at all. I grew up in New York City as a kid and James Brown still sings "Say It Loud, I'm Black and I'm Proud!" So how am I now African American when I haven't even been to Africa yet? (But I plan to visit in a few months.) From what I can see, Elon Musk and Charlize Theron are really "African American", but there is no pathway for them to say that either.
      "Person of color" is OK, I guess, but even when I hear that I look for blue Smurfs or a yellow Bart Simpson. The US has everyone totally confused. And how come South America is called that and not America? And how come professional American baseball claims to crown the "World Champions" when the only other teams are in Canada? Makes absolutely no sense at all. And don't get me started on American football, where they rarely even kick the ball, LOL!
      Sorry to belabor the point, but I wanted you to know that we all understand you loud and clear! We're simply just Black people; that will carry universally. Thank you @digital_magic

    • @luminousdragon
      @luminousdragon 11 months ago +2

      I assumed as much. When I heard the term it was slightly jarring, but given your accent I was 99% sure it was just a translation error. All good!

    • @digital_magic
      @digital_magic 11 months ago

      @@asksterling Thanx for your great comment, I totally agree with you :-)

  • @user-yn5us5eh7d
    @user-yn5us5eh7d 8 months ago +1

    thank u

  • @tazlo4173
    @tazlo4173 10 months ago +2

    *Netflix adaptations are gonna go to a whole new level*

  • @Ich.kack.mir.in.dieHos
    @Ich.kack.mir.in.dieHos 10 months ago +1

    Insane tutorial from such a humble artist!! I have a question regarding using this technique for dancing characters, like ballet for example. As there are a lot of different poses and movement going on, do you think this technique still makes sense, or do more complex movements still work better with a mix of WarpFusion and ControlNet???

    • @digital_magic
      @digital_magic 10 months ago +1

      Hey there, thanks for your nice comment. Tokyojab has created a video where a girl's dancing; it is also in this video. So it does work if you have the right settings and prompts.
      But in all honesty, this technique sometimes struggles with a lot of movement. I would suggest definitely using the depth map in ControlNet, which I show how to use in the second tutorial. I'm also working on some new techniques now with Deforum, to create videos like people create them with WarpFusion. As soon as that is finished, I will also try to create more realistic styles using this technique, but I don't know 100% for sure if it will work. As with anything in AI, it is just a lot of trial and error. That's why I think it's clever to start with 512 x 512 and not too many keyframes, so you save time on rendering.

  • @microsoftplus9366
    @microsoftplus9366 11 months ago +4

    Great tutorial. Really looking forward to part 2 :) Knowing how to use Tiled VAE sounds very helpful as someone who doesn't have the best GPU.

    • @digital_magic
      @digital_magic 11 months ago

      Yes, it is very helpful; I also have 8 GB of VRAM at the moment. I will start working on the 2nd tutorial tomorrow. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @microsoftplus9366
      @microsoftplus9366 11 months ago

      @@digital_magic Oh that's okay, we understand! Hope you are feeling better soon! Take care of yourself first and always, will be here :)

  • @inbox0000
    @inbox0000 10 months ago +1

    This is great, thanks. What is in the original 512 folder you put into EBsynth? The images before they were done in img2img in SD?

    • @digital_magic
      @digital_magic 10 months ago

      Thanks, I am glad you like the video. Yes, in my original 512 folder are the images that I extracted as an image sequence; in other words, the original video. My video was about 90 images, and those images are in that folder. I hope this helps, and I wish you a nice day.

  • @RaphaSpyker
    @RaphaSpyker 11 months ago +1

    Hey there! First of all, thank you for the video! I’d like to ask if you have the tutorial written down somewhere?
    Thanks again.

    • @digital_magic
      @digital_magic 11 months ago

      Sorry, no, only on my computer as a storyboard. But on YouTube you can download the transcript; just search on YouTube and you'll find out how to do it, it is very easy.

  • @lukewilliams7020
    @lukewilliams7020 11 months ago +1

    Great video! Well put together. Will this process work with 16:9 ratio images if you have a more powerful gpu?

    • @digital_magic
      @digital_magic 11 months ago

      Thanx, I am glad you liked the video 🙂 It was a lot of work to create it, and therefore I really appreciate your comment 🙂 And yes, I think this will also work with 16:9 images if you have a GPU with a lot of VRAM. However, I would recommend starting with 512x512, because then rendering times stay low in the learning phase. This is something Tokyojab also mentions in his Reddit post. Have a nice weekend 🙂

    • @lukewilliams7020
      @lukewilliams7020 11 months ago +1

      @@digital_magic thanks for the reply. Oh you can totally see the amount of work put into this. It’s really well produced content👍🏼

    • @digital_magic
      @digital_magic 11 months ago +1

      @@lukewilliams7020 Thanx again. Please let me know if you get good results with the 16:9 ratio images. Maybe send a link if you have produced a video with this method. I am also very new to this technique, and I think we can all learn from each other.

  • @florent625
    @florent625 8 months ago +2

    Hey, great tutorial, thanks! I'm having an issue with the part at 11:57 where you import the 4 output folders into DaVinci (I have the free version): it just imports all the frames into the media tab, but I don't have that "video timeline" that you have to work on. Do you have any idea if I'm missing something?

    • @florent625
      @florent625 8 months ago

      Nvm, I saw you posted the solution as a link under another comment; just going to bump it: ruclips.net/video/9YH6vilGFD4/видео.html

    • @digital_magic
      @digital_magic 8 months ago +1

      Yes, I know exactly what you mean, and I know a tutorial which shows you how to fix this. It is just a simple setting you need to change so that Resolve can import the images as a sequence. I will look it up on YouTube and send you the link soon.

    • @digital_magic
      @digital_magic 8 months ago +1

      This is the link for importing sequences into resolve
      ruclips.net/video/e9VIR39aCHw/видео.html

  • @SilverIlly3
    @SilverIlly3 11 months ago +1

    Thanks for the tutorial. What's the best approach for creating realistic video animation from scratch, with no reference video and no keyframes? Just text-to-image, for example.

    • @digital_magic
      @digital_magic 11 months ago +1

      Hello, and thanks for your comment. I'm not really sure; maybe try D-ID, or Runway ML's Gen-1 or Gen-2. With Stable Diffusion, I'm not sure if what you want to do is possible.

    • @SilverIlly3
      @SilverIlly3 10 months ago +1

      @@digital_magic okay thanks

    • @stephenwalsh2213
      @stephenwalsh2213 10 months ago +1

      Deforum is a good starting point

  • @mostafamostafa-fi7kr
    @mostafamostafa-fi7kr 10 months ago +1

    I already want part 2, plsssssss

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @mostafamostafa-fi7kr
      @mostafamostafa-fi7kr 10 months ago

      @@digital_magic I sincerely hope that your immune illness improves soon, allowing you to recover fully. Your dedication and perseverance, despite the challenges you're facing, truly inspire me. Take care and prioritize your health during this time

  • @deepfaceon
    @deepfaceon 10 months ago

    Nice job. Kindly share the second part. I have worked on a similar video on my channel; instead of using Stable Diffusion, I used a face-swap method to get a realistic picture. However, for a full-body style it will not apply. I would like to know if there is a possibility to use a Midjourney picture and get an exact pose from a reference image.

    • @digital_magic
      @digital_magic 10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take about 2-3 days, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @digital_magic
      @digital_magic 10 months ago +1

      And about Midjourney: at the moment I don't think that possibility exists, but I guess it will come in the future for sure.

    • @deepfaceon
      @deepfaceon 10 months ago +1

      @@digital_magic Thanks for your response. Hope you get well soon.

  • @robertocarlos3819
    @robertocarlos3819 10 months ago +1

    Hey, bro. Thanks for the tutorial! I have a question: can we use this workflow for animated, stylized, anime art? Or is it just for realistic art?

    • @digital_magic
      @digital_magic 10 months ago +1

      Yes, it works fine for animated, stylized anime art as well.

    • @Robertmillsjr
      @Robertmillsjr 10 months ago

      @@digital_magic thanks for the video! I subscribed. Could you include a quick step to do animation in your new video? I think a lot of people are looking for animation style.

  • @rohyts
    @rohyts 10 months ago +1

    Hi, in DaVinci Resolve, after running the saver, I get 240 frames as output, but you have only 75 in the 10-second video. Could you please tell me if I am missing something?

    • @digital_magic
      @digital_magic 10 months ago +1

      My video is not 10 seconds long, so don't worry about it. The frame count is just duration × frame rate, so a 10-second video gives 240 frames at 24 fps (or 250 at 25 fps); if your video is 10 seconds, then your count is perfect. I wish you good luck with creating, and enjoy it. Always feel free to ask more questions.

  • @Fusive
    @Fusive 10 months ago

    Where do you get the model you select at 6:55 called control_v11p_sd15_lineart?

    • @digital_magic
      @digital_magic 10 months ago

      huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
      Also make sure to update the extension in the settings in AUTOMATIC1111.
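      A minimal sketch of fetching that model with the huggingface_hub library; the target folder shown is the usual ControlNet extension path and is an assumption, so adjust it to your install:

      # Download control_v11p_sd15_lineart.pth from the repo linked above.
      from huggingface_hub import hf_hub_download

      hf_hub_download(
          repo_id="lllyasviel/ControlNet-v1-1",
          filename="control_v11p_sd15_lineart.pth",
          local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
      )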

    • @Fusive
      @Fusive 10 months ago +1

      @@digital_magic Thanks!

    • @digital_magic
      @digital_magic 10 months ago

      @@Fusive You're welcome 🙂

  • @eli7111
    @eli7111 10 months ago +2

    Part 2, pleeeeeeese!

    • @digital_magic
      @digital_magic 10 months ago

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take about 5-10 days, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @eli7111
      @eli7111 10 months ago +1

      @@digital_magic
      Oh no, I hope you feel better soon. I'm just curious how it works with the masks! But health comes first!

    • @eli7111
      @eli7111 10 months ago

      @@digital_magic And thanks for the great work!

    • @digital_magic
      @digital_magic 10 months ago

      @@eli7111 I am working on the video from Monday till Friday 🙂 Maybe Friday it will come out, but I am not 100% sure...

  • @Sfuentez098
    @Sfuentez098 10 months ago

    Hi! Love the videos. I have a question... What can I do if EBsynth is just giving me "ugly" results? It looks like it's melting; it's just not working for me.

    • @digital_magic
      @digital_magic 9 months ago

      If EBsynth is giving you ugly results, there's probably too much motion in your video, or you didn't use enough keyframes. Unfortunately, this technique is not very good with fast motion; then it's better to use a technique like I showed in my last video using Deforum. That is a bit less consistent, but it normally doesn't give ugly results; the results are nice to look at, although they aren't 100% consistent.

  • @thays182
    @thays182 8 months ago +1

    Do you have a Discord or Steam group for connecting with other AI enthusiasts? I'm trying to get this to work for my application (converting in-game video of characters to realistic versions and translating this to animations). The results I'm getting are terrible, as I think this tutorial is really only specific to faces... How could a wider frame shot, like a full body walking down a road, be applied to this method? Does the ControlNet method employed here simply not work? I lose 80% of my detail.

    • @digital_magic
      @digital_magic 8 months ago

      Hey there and sorry I don't have a Discord group.

    • @digital_magic
      @digital_magic 8 months ago

      If you want to change a full body, then you should try the Deforum extension, which I made my last four tutorials about. It is a very interesting technique, and the results are getting better and better.

  • @wasdadan
    @wasdadan 10 months ago +1

    I'm kinda new to AI, but in the past months I've had much better results just using the EBsynth extension. Messed-up eyes and mouth and still lots of flickering, from a vid which has almost no motion. And all you did was change the color (can I still say color?).
    And why exactly did I have to download 3 models? Anyway, thanks a lot for your time and effort.

    • @digital_magic
      @digital_magic 10 months ago

      I just wanted to show Tokyojab's whole working process. And yes, this method also has its limitations; I totally agree with that :-)

  • @davewaldmancreative
    @davewaldmancreative 4 months ago +1

    I'm back. Thanks again for this. I have a question, as I'm now super stuck and would be so grateful for help :-) When I do txt2img with my grid, the result image doesn't see the grid; it's like it's not seeing ControlNet. Have you come across this? I'm guessing there's a box somewhere deep in the settings I've missed. Thanks again! Dave

    • @digital_magic
      @digital_magic 4 months ago

      Have you updated ControlNet? And updated auto1111? There is not much else I could advise you, since I can't see your screen...

    • @davewaldmancreative
      @davewaldmancreative 4 months ago +1

      Thanks again for replying. So I'm definitely no coder. I looked in the cmd window and it told me this:
      Launching Web UI with arguments: --ckpt-dir D:\AUTOMATIC1111\SDMODELS\models\Stable-diffusion
      no module 'xformers'. Processing without...
      Could that be the problem? I'm based in Amsterdam, just in case that's a Dutch accent. :-) Thanks again.

    • @digital_magic
      @digital_magic 4 months ago

      @@davewaldmancreative Yes, I am Dutch 🙂 Yeah, I think that could be the problem. I would suggest you ask in one of the Stable Diffusion Discord groups, like the Deforum group for example. Here is the link:
      discord.com/invite/deforum
      Or ask in the Reddit Stable Diffusion group:
      www.reddit.com/r/StableDiffusion/

    • @davewaldmancreative
      @davewaldmancreative 4 months ago

      Thanks so much. Yes, I've loaded the WebUI, but I think your video said SDXL. I'll try the groups. Really appreciate your help.
      @@digital_magic

  • @DealingWithAB
    @DealingWithAB 8 months ago

    With the introduction of "Realistic Vision V5.1", I'd assume this would be the choice, or is 2.0 still the way to go?

    • @digital_magic
      @digital_magic 8 months ago +1

      Yeah, you're right; always try the newer models. That's the problem with tutorials: information gets outdated.

  • @thays182
    @thays182 8 months ago +1

    Do you have a tutorial on how to do this with DaVinci with the free version, from the beginning? I'm not familiar with the software, and your initial steps in DaVinci are hard to follow/duplicate on the free version... Where can I go to get this portion sorted out? Thanks for the amazing videos!

    • @digital_magic
      @digital_magic 8 months ago

      Sorry, I have no in-depth tutorial about that with the DaVinci free version.

  • @NaveenKumar-up6fi
    @NaveenKumar-up6fi 9 months ago

    Hi, my sequence created 240 images, and EBsynth is saying files are missing. I researched it: initially my dimensions were wrong. There was something wrong with the Sprite Cutter, so I had to slice the images using Photoshop, which resulted in dimensions of 256px × 256px; I had to correct them to 512px × 512px manually. Should I try shortening the sequence to 74 frames and try again, because EBsynth is not recognizing the size?

    • @digital_magic
      @digital_magic 9 months ago +1

      In the sprite cutter, make sure to disable padding, because it adds an extra pixel if you leave it enabled.
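      A quick way to catch that stray padding pixel before EBsynth complains is to check the grid size in Python (a minimal sketch; the file name and the 2x2-of-512px layout are assumptions taken from the tutorial):

      from PIL import Image

      grid = Image.open("grid_stylized.png")
      # Padding adds stray pixels, so a 2x2 grid of 512px tiles must be
      # exactly 1024x1024 for the cut tiles to match the 512x512 frames.
      assert grid.size == (1024, 1024), f"unexpected grid size: {grid.size}"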

  • @shaynehunter6160
    @shaynehunter6160 10 months ago

    Does RunDiffusion (the website version) already have all the needed models?

    • @digital_magic
      @digital_magic 10 months ago

      I am not sure, but I think they update everything; the technique also works with other models :-)

    • @shaynehunter6160
      @shaynehunter6160 10 months ago

      @@digital_magic Hey thanks. Yeah, it does work with other models, and for anyone else: you can upload those models (or rather download them to the server; it's a bit complicated, but if you're attempting any of this you can do it, though you need to be a club member, which is about $30 a month).

  • @microsoftplus9366
    @microsoftplus9366 10 months ago +1

    I'm confused. I've got 30 keyframes, and when I drag the 30-keyframe folder into EBsynth it's not adding anything to Stop: in the bottom section. Then it says I'm missing keyframe 0001 when I click generate, but my first frame is 0000?

    • @digital_magic
      @digital_magic 10 months ago

      EBsynth only allows 22 keyframes. Did you rename all the keyframes properly? And please let me know your result, maybe you can send me a link?
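      If the naming is the issue, here is a minimal sketch of zero-padded renaming; the folder name, the "key" prefix, and the 0000-based numbering are assumptions, so match them to your own sequence:

      from pathlib import Path

      # Rename keyframe files to zero-padded, 0000-based names so they
      # line up with the frame numbers EBsynth expects.
      files = sorted(Path("keys").glob("*.png"))
      for i, f in enumerate(files):
          f.rename(f.with_name(f"key{i:04d}.png"))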

  • @Adem.940
    @Adem.940 10 months ago +1

    Bro, I am still waiting for the second part. And please make a tutorial for DaVinci too, thanks!

    • @digital_magic
      @digital_magic 10 months ago

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will still take about 10 days, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.
      What kind of DaVinci tutorials do you mean?

    • @Adem.940
      @Adem.940 10 months ago

      @@digital_magic Get well soon. About DaVinci: I was trying to learn how to extract frames and stuff, but I managed to do it after searching on the net :)

    • @digital_magic
      @digital_magic 10 months ago

      @@Adem.940 Thanx Mate 🙂

  • @Bsprouts
    @Bsprouts 9 months ago +1

    I followed these instructions but don't see the lineart model after choosing lineart realistic as the preprocessor - any suggestions? I want to try this so badly.

    • @digital_magic
      @digital_magic 9 months ago

      This is the one you should see in the model dropdown: control_v11p_sd15_lineart [43d4be0d]
      If it is not there, then you should probably update the ControlNet extension and Stable Diffusion.

  • @AIExplor.e
    @AIExplor.e 9 months ago +1

    Quick question:
    My computer doesn't allow me to create 1024x1024 in Stable Diffusion, and if I have 512px PNGs from the grid and 900x900 from Stable Diffusion, it creates a problem in EBsynth; it says the resolution is different.
    What if I make the frames 450x450 (not 512) in DaVinci and then make a grid,
    and after that generate the image at 900x900 (not 1024) in Stable Diffusion?
    Could this work in EBsynth as well,
    or does it only take 512px images?
    Hopefully this makes sense to you, and sorry for my English.

    • @digital_magic
      @digital_magic 9 months ago

      I guess so: a 2x2 grid of 450px tiles comes out to exactly 900x900, so the sizes stay consistent. But why not stay with 512
      and then upscale later?

    • @AIExplor.e
      @AIExplor.e 9 months ago +1

      @@digital_magic Because the image isn't as clear as I want.
      I want to create a project where I'm talking, and I don't want too much flickering.

    • @digital_magic
      @digital_magic 9 months ago +1

      Sounds awesome; maybe you can send me the end result later... @@AIExplor.e

  • @shailendrarathore445
    @shailendrarathore445 11 months ago +1

    Can we do this process using Google Colab, step by step?
    I loved the way you explain, but I cannot install it locally.

    • @digital_magic
      @digital_magic 11 months ago +1

      I am not sure if you can do this with Google Colab; I have never worked with that before. I am sorry that I can't help you with this.

    • @shailendrarathore445
      @shailendrarathore445 9 months ago

      As I am trying to run Stable Diffusion on Google Colab,
      please let me know which models are required initially.

  • @ourcyberheaven2467
    @ourcyberheaven2467 9 months ago +1

    One question: whenever I try the tile technique for generating, the images end up very wacky, even with the same settings and a higher number of steps. What is the solution to this?

    • @digital_magic
      @digital_magic 9 months ago

      Sorry, I can't give you the answer for that, because I'd need to see the image. Did you solve the problem already?

    • @ourcyberheaven2467
      @ourcyberheaven2467 9 months ago +1

      @@digital_magic No worries! I think it's a matter of the checkpoint I used, since I tried a different one and it works better. But I just saw you uploaded a new video with a better technique, so I'm checking it out!!!

    • @digital_magic
      @digital_magic 9 months ago

      @@ourcyberheaven2467 Great, I hope you like the new video 🙂

  • @planetmuskvlog3047
    @planetmuskvlog3047 10 months ago +22

    I don't know why people are having the AI generate the background in every frame. Animators do not draw the background over and over: we have a BG plate, and the character animation is an overlay. Instant flicker-free.

    • @digital_magic
      @digital_magic 10 months ago +8

      Thanks for your comment; I understand what you mean. Years ago I created many videos for kids and filmed myself on a green screen, so I always worked with a background plate. I probably should have done it with this video as well, but as I'm currently suffering from an autoimmune illness I can only work 2 hours per day on the computer, because I have inflammation in both elbows and shoulders. So I skipped it because it saved me time creating the tutorial. I hope you understand, and here is the link to my old channel where I created the kids' videos:
      www.youtube.com/@ZupalandFunLearn

    • @planetmuskvlog3047
      @planetmuskvlog3047 10 months ago +1

      @@digital_magic thanks for the reply and best of luck!

    • @hdbfilmz7999
      @hdbfilmz7999 10 months ago +2

      ​@digital_magic praying for your health. I hope you can recover from your ailments and continue making magic.

    • @digital_magic
      @digital_magic 10 months ago

      @@hdbfilmz7999 Thanx 🙂

    • @Zfrancis87
      @Zfrancis87 10 months ago

      Hang in there; it's hard to find quality tutorial videos like yours. You are doing an amazing service. 🫡

  • @hemantsharma7986
    @hemantsharma7986 10 months ago +1

    Can we also create dancing videos using this? Would that be possible, as they will have more motion compared to this?

    • @digital_magic
      @digital_magic 10 months ago

      It should be possible, but with more ControlNet units, and front-filmed videos, I guess...

  • @Sasalektio
    @Sasalektio 10 months ago +1

    Can this method be used as an alternative for Harry Potter Balenciaga-type videos, or can't it be used to make a character speak? What do you think?

    • @digital_magic
      @digital_magic 10 months ago

      Yes it can, but it's much more work than with D-ID. I am working on a speaking person for the 2nd tutorial. I guess it will take about 10 days, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @Sasalektio
      @Sasalektio 10 months ago

      @@digital_magic Thank you! Hope you recover soon mate.

  • @ShadowangelZero
    @ShadowangelZero 9 months ago +1

    Hello, nice tutorial :)
    I wanted to ask you something: why, when I insert an image in grid format, is the grid totally ignored and the output one big image instead of 4? Am I doing something wrong?

    • @digital_magic
      @digital_magic 9 months ago +1

      I haven't had issues like this, but have you also inserted the grid into the ControlNet unit?

    • @AnatoliyO
      @AnatoliyO 9 months ago +1

      Running into the same problem here: the grid is not recognized by SD, and the generated image always comes out as one. Also, when choosing the "lineart_realistic" preprocessor, there are no models to choose from. Not sure if these two things are connected. Anyone had the same issue?

    • @digital_magic
      @digital_magic 9 months ago

      @@AnatoliyO This is the one you should see in the model dropdown: control_v11p_sd15_lineart [43d4be0d]
      If it is not there, then you should probably update the ControlNet extension and Stable Diffusion.

    • @digital_magic
      @digital_magic 9 months ago

      @@AnatoliyO Is the grid already working, or are you still getting the generated image as one?

    • @AnatoliyO
      @AnatoliyO 9 months ago +1

      @@digital_magic After installing the additional model for ControlNet, I'm able to choose a model for the lineart, and now the grid is working. Thank you for your video!

  • @hamzaagens
    @hamzaagens 10 months ago +1

    Awesome tutorial so far! Can't wait to see part 2. P.S. You might want to change the typo in your description that calls the OG creator of this method "Tokyojap" instead of "Tokyojab", because... well, yeah. You know. Or I would hope you do. Cheers!

    • @digital_magic
      @digital_magic 10 months ago

      I am glad you liked it :-) I am working on the 2nd tutorial now. I guess it will take 2 weeks, I'm afraid, as I am suffering from an immune illness at the moment; my joints and tendons in both elbows and shoulders are inflamed. I can only work a maximum of 2 hours on the computer per day at the moment.

    • @digital_magic
      @digital_magic 10 months ago

      And thanx for the typo thing; I changed it now 🙂

  • @lilillllii246
    @lilillllii246 6 months ago +1

    Is there any way to apply the same outfit to the video in a natural way?

    • @digital_magic
      @digital_magic 6 months ago

      I am not sure, maybe use masking techniques in your editing software?

  • @WallyMahar
    @WallyMahar 11 months ago +2

    I hope to knock the universe over and do some scenes from the book of 'The Exorcist' that aren't in the movie. There are some great scenes I would love to bring to life... and in the audiobook the author does the dialogue perfectly...

    • @digital_magic
      @digital_magic 11 months ago

      Sounds very interesting, please send me a link if you have created it

    • @planetmuskvlog3047
      @planetmuskvlog3047 10 months ago

      Stable Diffusion for sure. Others have “horror” bans and flag anything creepy

  • @watchlater8755
    @watchlater8755 9 months ago

    My "lineart realistic" preprocessor is there, but my "lineart" model doesn't appear in the dropdown menu even though it's installed. What do I do?

    • @digital_magic
      @digital_magic 9 months ago

      Hey there, have you solved the problem already?
      After choosing lineart realistic as the preprocessor, in the model dropdown you should choose this: control_v11p_sd15_lineart [43d4be0d]
      So there is no special "lineart realistic" model. I hope this helps, and I wish you a very nice day.

    • @watchlater8755
      @watchlater8755 9 months ago

      @@digital_magic When I choose lineart realistic, all I see in the models are canny and canny inpaint, for some reason.

  • @bruhmoment3731
    @bruhmoment3731 10 months ago

    3:00 this took digital blackface to a whole new level😂😂😂

    • @digital_magic
      @digital_magic 10 months ago +1

      Thanx for your comment. I wasn't aware of it; that's why I apologized in the pinned comment. I am sorry if I have offended you. I am from the Netherlands and I didn't want to just say "black woman". I thought I was doing it right, but I clearly wasn't :-(

    • @bruhmoment3731
      @bruhmoment3731 10 months ago +1

      @@digital_magic no no no no. It's not offensive by any means. I was just joking. Maybe some overly sensitive people would think that's offensive but I think it's perfectly fine because the black woman's appearance in the video is not a caricature or a mockery of black people. There's obviously no negative intention behind it. No need to apologize😄

    • @digital_magic
      @digital_magic 10 months ago +1

      @@bruhmoment3731 Thanx for your kind comment :-) And that is exactly how it is: there is no negative intention behind it at all. I am glad you see this :-) Thanx, my friend.

    • @bruhmoment3731
      @bruhmoment3731 10 months ago

      ​@@digital_magic You're very welcome. I love keeping up with the latest AI technology by watching videos like this so keep them coming :)

  • @DevillMusic
    @DevillMusic 10 months ago +1

    Is there a reason why, when I drag my output folders into DaVinci Resolve, it doesn't play them as clips? They're all separate images.

    • @digital_magic
      @digital_magic 10 months ago

      Yes, there is; you need to enable something. I'll see if I can find a tutorial about it to send you...

    • @digital_magic
      @digital_magic 10 months ago

      here is the link: ruclips.net/video/9YH6vilGFD4/видео.html

  • @bruno_sanches
    @bruno_sanches 7 months ago +2

    Is there any method to do this with an anthropomorphic animal, like a bear, shark, etc.?
    I tried different settings and I can't control the movements with the original frames... and it's also difficult to maintain consistency...

    • @digital_magic
      @digital_magic 7 months ago +1

      Yes it is definitely possible to also do it with an animal, here is an example from Tokyojab: ruclips.net/video/3_fb2y9NrAE/видео.htmlsi=NgjCwPeqTmD7iVS8

    • @digital_magic
      @digital_magic 7 months ago +1

      You can't use wild movement; you can only use subtle movements with this technique.

    • @bruno_sanches
      @bruno_sanches 7 months ago

      @@digital_magic So it only works going from animal to animal? If I try to use a video of mine and transform it into a bear doing basic movements like "moving head and mouth", it just doesn't work?

  • @luannews
    @luannews 10 months ago +1

    When I select lineart realistic, nothing can be seen in the model dropdown... Have I missed something here?

    • @digital_magic
      @digital_magic 10 months ago

      Probably the model that you have to download from Hugging Face is not in the right folder, I assume...

    • @digital_magic
      @digital_magic 10 months ago

      Maybe this article helps:
      github.com/Mikubill/sd-webui-controlnet/issues/548

  • @carlkenner4581
    @carlkenner4581 10 months ago +1

    I've been working on this problem and making an extension to do something similar automatically. But I've been using other methods, and it's not finished.
    BTW, notice that the "white" woman still has the wrong-shaped lips, nose, and forehead. That's the problem with asking it to follow the lines of the original image too closely.

    • @digital_magic
      @digital_magic 10 months ago

      Yeah, I see what you mean. Please let me know about your extension when you have finished it. I'd love to see what you create.

  • @foxy2348
    @foxy2348 8 months ago +1

    EBsynth is so limited to small movements, though... How can I get a bigger range of movement?

    • @digital_magic
      @digital_magic 8 months ago

      Try Deforum, like I did in my last 3 tutorials.

  • @lucretius1111
    @lucretius1111 10 months ago +1

    👍👍!!

  • @Nekotico
    @Nekotico 3 months ago +1

    What's the name of the ControlNet extension?

  • @natniszakov_
    @natniszakov_ 10 months ago +1

    Hello there! I hope you are doing better! I just did everything step by step, but I'm getting single odd images and not 4 images as you are getting. Do you have any idea what could be happening here? Thank you very much!

    • @digital_magic
      @digital_magic 10 months ago

      Did you also add the grid into ControlNet and enable it?

    • @natniszakov_
      @natniszakov_ 10 months ago +1

      @@digital_magic oh, I don't think so! I'm trying to look into this but I can't seem to find it. Also I keep getting deformed people in the images, so strange! Also thank you very much for the fast answer, super appreciate it!

    • @digital_magic
      @digital_magic  10 months ago +1

      @@natniszakov_ Hey there, did you find the solution already? Are you working in text-to-image or image-to-image? Are you using exactly the same prompts as I do? And are you using the same model that I used?

    • @natniszakov_
      @natniszakov_ 10 months ago +1

      @@digital_magic Thank you so much again for your response! I'm doing everything exactly as you do in the video, which is why I can't figure out what's going on. The only difference is that I'm using a Mac M1; do you think that could affect the whole process? Otherwise I'll just keep trying! Thank you very much again for the concern!

    • @natniszakov_
      @natniszakov_ 10 months ago

      @@digital_magic Also no, I'm using the same prompts but with my own image, and the image follows the process you explained in the first part of the video.

  • @blackterminal
    @blackterminal 8 months ago +1

    Hopefully this gets less complicated, i.e. with less swapping between apps. But this was interesting nonetheless.

    • @digital_magic
      @digital_magic  8 months ago

      Yes, I think over time it will all get less complicated; that's how it always is with new techniques, in the beginning it is very hard. My goal is to create tutorials that are easier to follow in the future, but I also depend on how the software develops.

  • @philzan3627
    @philzan3627 6 months ago +1

    I tried a few variations, but I cannot get this to work at all. The 4x4 grid never outputs 4 frames, but a chimera of sorts with everything mixed in together. I tried different checkpoints and VAEs, but that only made it worse.

    • @digital_magic
      @digital_magic  6 months ago

      I am sorry to hear that 😞 Have you tried updating SD and ControlNet? That often works for me when something doesn't work.

    • @philzan3627
      @philzan3627 6 months ago

      @@digital_magic I'm on A1111 v1.6, ControlNet 1.1.415.
      I did a test and can make it work with that VAE and model, but when I tried a different VAE or model it failed.
      I am working with an XYZ plot to check what the cause may be: either the VAE, the checkpoint, or something else.

    • @philzan3627
      @philzan3627 6 months ago

      @@digital_magic So I did an XYZ plot using various VAEs and checkpoints. NOW it works somehow... because of course it would! The detailing, however, is very difficult to change. I tried a walking animation, and unfortunately I cannot turn a stick figure into a person of proportion. That, or the details are missed.
      It's an interesting technique, and to be fair it makes sense. However, your best bet would be to train a LoRA that generates these frames and then use that in EbSynth.

  • @AIExplor.e
    @AIExplor.e 9 months ago +1

    Hopefully someone will answer my question.
    I have installed all of this on a MacBook M1, all good so far.
    I am doing exactly the same thing with ControlNet, but when I press generate it doesn't generate the 4 images I selected, just a random one. It's like my ControlNet doesn't work, and I don't know what to do.
    Please help!

    • @digital_magic
      @digital_magic  9 months ago +1

      I think you have to update ControlNet and download the latest models. Here is the link: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

    • @digital_magic
      @digital_magic  9 months ago +1

      I have had many other comments where people had the same problem, and this helped them in the end. I hope it will help you as well. I wish you a very nice day.

    • @AIExplor.e
      @AIExplor.e 9 months ago +1

      Thanks for answering. The problem was solved: I didn't have the model file next to the preprocessor. I downloaded it and it works.
      But now there's another problem.
      I have a MacBook Air M1 and it's generating slowly. When I try 1024x1024, the terminal says: "MPS backend out of memory (MPS allocated 5.32 GB, other allocations 3.02 GB, max allowed 9.07 GB). Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations"

    • @digital_magic
      @digital_magic  9 months ago

      @@AIExplor.e Sorry, I work on a PC.
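
      For Mac readers who hit the same message: the error itself names the workaround. Below is a minimal sketch of disabling the MPS memory cap before PyTorch initializes, assuming you launch your own Python script (for the Automatic1111 webui, the equivalent is exporting the same variable in your shell before starting it). Note that the error warns this may cause system instability:

      ```python
      # A minimal sketch: the variable must be set BEFORE torch is imported.
      # PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 removes the MPS memory cap, as the
      # error message itself suggests -- it can make the whole system unstable.
      import os

      os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

      import torch  # imported only after the variable is set

      if torch.backends.mps.is_available():
          device = torch.device("mps")
          print("MPS device ready, memory watermark disabled")
      ```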

  • @envoy9b9
    @envoy9b9 10 months ago +1

    @3:48 How do I add a Saver?

    • @digital_magic
      @digital_magic  10 months ago +1

      Press Shift + Spacebar and type "saver" in the search box that pops up.

  • @SethBrundleDirtyFly
    @SethBrundleDirtyFly 10 months ago +2

    EbSynth makes my frames look terrible. Where your example messed up maybe 1 or 2 frames, mine ended up with most of the video being unusable: a lot of trails and bad encoding, across many frames.

    • @digital_magic
      @digital_magic  10 months ago

      If there is too much motion in the video, this technique unfortunately has its limitations. Could you send me a link to your video so I could have a look?

    • @RikkTheGaijin
      @RikkTheGaijin 10 months ago

      Same here. My face rotates, and half of the face is all garbage.

  • @Shingo_AI_Art
    @Shingo_AI_Art 10 months ago +1

    So I tried it on a video of a girl dancing (upper body), and I get this ugly mess between my keyframes. Does that mean I have to make more keyframes? I have 425 frames in total.

    • @digital_magic
      @digital_magic  10 months ago +1

      Yeah, this could mean you need to make more keyframes, but you also have to realize that this technique won't be able to transform every shot: if there's too much motion, it won't work. I would love to see the work you create, and if you could show me your result, I could give an opinion on whether your shot can handle it or not. Wish you a very nice day.

    • @Shingo_AI_Art
      @Shingo_AI_Art 10 months ago +1

      @@digital_magic Actually, I retried with a few more keyframes and lowered the video framerate (from 30 to 18) to make it easier, but that just doesn't do it; you're right, there is way too much motion. I will try the Temporal Kit method and post the results when it's done.

    • @digital_magic
      @digital_magic  10 months ago

      @@Shingo_AI_Art Yes, I would love to see it.

    • @digital_magic
      @digital_magic  10 months ago +1

      @@Shingo_AI_Art I am also going to try to create photorealistic videos with the Deforum technique I used in my latest tutorial video. I'm very curious how far we can push this.

    • @Shingo_AI_Art
      @Shingo_AI_Art 10 months ago

      @@digital_magic I'm sorry, I can't post the link here; it keeps getting deleted by the YouTube algorithm even if I replace the dots with other characters. I provided a link to my IG in the "about" section of my YT channel, though; it's the latest post.

  • @lstephen
    @lstephen 10 months ago +1

    Can I do this with the Graviti webui site?

    • @digital_magic
      @digital_magic  10 months ago +1

      Sorry, I don't know that program.

  • @darkbilo2194
    @darkbilo2194 10 months ago +2

    Neeeeeeeeeeeeeeeeeeddddd second parttttttttttttttttttttttttttttttttttttttttttttttttttttttt😊😊😊😊😊

    • @digital_magic
      @digital_magic  10 months ago +1

      I am glad you liked it :-) I am working on the 2nd tutorial now. I'm afraid it will take about 5-7 days, as I am suffering from an immune illness at the moment; the joints and tendons in both my elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer right now.

    • @darkbilo2194
      @darkbilo2194 10 months ago +1

      @@digital_magic Sad to hear that.

    • @digital_magic
      @digital_magic  10 months ago

      @@darkbilo2194 My homeopathic doctor thinks I will be healthy again in 5 months, so there's hope for the future.

  • @Infiniteinsights0
    @Infiniteinsights0 1 month ago

    My generations keep coming out with the wrong hair color. Could you show how to change hair color in DaVinci Resolve?

  • @TheidiotAmongUs
    @TheidiotAmongUs 10 months ago +1

    What tool did you use for the deepfake?

    • @digital_magic
      @digital_magic  10 months ago +1

      Swapface; there is a tutorial on my channel, the one with the 1-click thumbnail.

  • @funnygame636
    @funnygame636 10 months ago +1

    Excuse me, with this method, how do you generate more than 4 frames?

    • @digital_magic
      @digital_magic  10 months ago

      Drag 9 or 16 frames into the tool I use in the video. I will show this in the 2nd tutorial.
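
      If you'd rather skip the sprite-sheet website, the same grid can be built locally. Here is a minimal sketch with Pillow, assuming equally sized frames and the hypothetical filenames frame_001.png, frame_002.png, ...:

      ```python
      # A minimal sketch: tile N equally sized frames into a square grid.
      # The frame count (4, 9, or 16) and filenames are illustrative assumptions.
      import math
      from PIL import Image

      frames = [Image.open(f"frame_{i:03d}.png") for i in range(1, 10)]  # 9 -> 3x3
      cols = int(math.sqrt(len(frames)))  # square grids: 2x2, 3x3, 4x4, ...
      w, h = frames[0].size

      grid = Image.new("RGB", (cols * w, cols * h))
      for idx, frame in enumerate(frames):
          # Place each frame left-to-right, top-to-bottom.
          grid.paste(frame, ((idx % cols) * w, (idx // cols) * h))

      grid.save("grid.png")
      ```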

  • @nightmisterio
    @nightmisterio 10 months ago +1

    Does it work correctly with 16:9?

  • @TeddyLeppard
    @TeddyLeppard 10 months ago +3

    Give it a few months, and a company like Adobe will copy and automate this process to make it ridiculously easy for a creator to use. All of this could be made far, far more intuitive and user-friendly.

    • @digital_magic
      @digital_magic  10 months ago +2

      For sure, it's just a matter of time; I totally agree with you. But for now it is great to play along with what is currently possible 🙂

  • @timzheart8659
    @timzheart8659 9 months ago

    When I try to render the grid in Stable Diffusion, I only get 1 or 2 people... does anybody know what the problem might be?

    • @digital_magic
      @digital_magic  9 months ago

      Sorry, I've never had this; maybe try updating Stable Diffusion and see if that helps.

  • @AIExplor.e
    @AIExplor.e 9 months ago +1

    And also, what's inside the OUTPUT 1-4 folders?

    • @digital_magic
      @digital_magic  9 months ago

      I don't understand what you mean?

    • @AIExplor.e
      @AIExplor.e 9 months ago

      @@digital_magic I don't have those. I mean, for EbSynth:
      Folder Output 1
      Output 2
      Output 3
      Output 4

    • @digital_magic
      @digital_magic  9 months ago

      @@AIExplor.e I created these folders myself, so I can direct the EbSynth outputs into them.
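
      For anyone reproducing this setup, here is a minimal sketch that cuts a finished 2x2 grid back into 4 keyframes and creates one output folder per EbSynth run; the grid filename and folder names are illustrative assumptions:

      ```python
      # A minimal sketch: split a 2x2 grid into 4 keyframes and prepare one
      # output folder per EbSynth run. Filenames and folder names are assumptions.
      from pathlib import Path
      from PIL import Image

      grid = Image.open("grid_out.png")  # the grid Stable Diffusion generated
      cols = rows = 2                    # 2x2 grid of 4 keyframes
      w, h = grid.width // cols, grid.height // rows

      for idx in range(cols * rows):
          x, y = (idx % cols) * w, (idx // cols) * h
          grid.crop((x, y, x + w, y + h)).save(f"key_{idx + 1}.png")
          Path(f"output_{idx + 1}").mkdir(exist_ok=True)  # EbSynth output folder
      ```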

  • @shizool2359
    @shizool2359 3 months ago +1

    Hello, can anyone help? How do I add a Saver?

    • @digital_magic
      @digital_magic  3 months ago +1

      You mean in DaVinci Resolve? Hold Shift and hit Spacebar; a search box for tools pops up, then type "saver" into it.

    • @shizool2359
      @shizool2359 3 months ago

      @@digital_magic Thanks, bro.

  • @uversev
    @uversev 10 months ago +2

    Part 2 when?

    • @digital_magic
      @digital_magic  10 months ago

      I am glad you liked it :-) I am working on the 2nd tutorial now. I'm afraid it will take 2 weeks, as I am suffering from an immune illness at the moment; the joints and tendons in both my elbows and shoulders are inflamed. I can only work a maximum of 2 hours per day on the computer right now.

    • @uversev
      @uversev 10 months ago

      @@digital_magic Totally understandable; please take your time and take care :)

  • @MikevomMars
    @MikevomMars 10 months ago +1

    So this method is restricted to:
    1. Square video format only?
    2. Four different input frames only? 🤨

    • @digital_magic
      @digital_magic  10 months ago +1

      No, not at all. You could also use a 512x1024 resolution, and you can put 16 or maybe even more images in the grid. In the second tutorial I made about this, I show how it works. Hope this helps; wish you a nice day.

    • @MikevomMars
      @MikevomMars 10 months ago

      @@digital_magic Will try this, thanks for the hint 👍

  • @MeesterGgaming
    @MeesterGgaming 10 months ago

    This video deserves millions of views, in my opinion. I had the issues you described; I love Gen-1/2, but the quality is not there yet.
    This thing of TokyoJab's seems amazing 🥰. I think the part where you reverse the black woman into a white woman will make people happy that want to identify themselves like... ok, I am joking. Great video!

    • @digital_magic
      @digital_magic  10 months ago

      Thanks for your great comment; I really appreciate it, and I am very happy that you loved the video. TokyoJab is a true genius, and he has been very helpful in teaching me his method; I owe him a lot. I also want to say that I'm sorry I used the words "colored woman". I wasn't aware that I could offend people with this, which is why I apologized in the pinned comment. I am from the Netherlands and I didn't want to say "black woman"; I thought I was doing it right, but I clearly wasn't :-(

  • @chrisbraeuer9476
    @chrisbraeuer9476 10 months ago +2

    Holy. I tried to follow this today, but with no knowledge of DaVinci it's more of a guessing game. Like, what the heck is a Saver, and how do you add one? It's not in the add-tools tab for me, and not in the list either; I don't have the search bar like you do.
    Found it. It was under I/O.
    Managed to extract the frames.

    • @digital_magic
      @digital_magic  10 months ago +2

      Shift + Spacebar opens the node-tool search. And great that everything is working now! Maybe send me a link to your result if you upload it somewhere? I would love to see results from viewers.

    • @chrisbraeuer9476
      @chrisbraeuer9476 10 months ago

      Unfortunately, the first try failed: 500 frames and 4 keyframe images does not work.
      The second was better, but I did not find a way to make a video out of the images; I had it all in the timeline but could not find an option to export it as a video.
      More keyframe images would be cool.
      I have seen this guy did the hands separately too; there is something on Civitai that outputs the normal maps for the hands.
      But I learned a lot today. Thanks for the help.