Stable Diffusion + EbSynth (img2img)

  • Published: 22 May 2024
  • Experimenting with EbSynth and Stable Diffusion UI.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    _________________________________________________________________________
    EbSynth Website:
    ebsynth.com
    How to install Stable Diffusion tutorial:
    • Installing Stable Diff...
    Introduction to Ebsynth tutorial:
    • EbSynth Tutorial (AI A...
    MidJourney Style: mega.nz/folder/Z0xS1BpI#S40xU...
    Timecodes:
    00:00 Intro
    00:37 An example
    01:36 Getting Started
    02:16 Experimenting SD
    05:40 Import to Ebsynth
    08:06 Import to AE
    12:58 Other Results

Comments • 111

  • @rickardbengtsson · 1 year ago

    Cool experiment!

  • @Pinocchio20 · 1 year ago

    You are awesome! Thank you!

  • @Mangazimusic · 1 year ago · +50

    When stable diffusion learns to consistently re-interpret multiple frames of a dynamic video in the same style, it will be the most fundamental upset in the VFX industry this decade, potentially of the century.

    • @enigmatic_e · 1 year ago · +1

      Facts

    • @AbjectPermanence · 1 year ago · +8

      It's already possible to get some consistency. For example, stabilize the footage for the subject's face, then lock your seed in SD, and you can get consistent results from different frames. Corridor Crew did it in a recent "Into the Spider-Verse" video where they used SD and EbSynth to mimic the style of that movie.
      Using strong prompts along with training a model well would give consistency too. Same idea as how deepfakes work: just force-feed it enough reference data for the AI to "understand" what you want. If your training data is good enough, and your prompt is really specific, the AI won't be able to produce much of anything except what you want.

    • @enigmatic_e · 1 year ago · +1

      @AbjectPermanence Yup, I have some examples of that in my newer videos, if you'd like to check those out.

    • @ferencbalint4876 · 1 year ago

      @AbjectPermanence Hey, could I ask a few questions, man?
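
The locked-seed idea described in this thread can be sketched with Hugging Face's diffusers library. This is a minimal illustration, not the workflow from the video; the model ID, frame file names, prompt, and strength value are assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Stable Diffusion 1.x img2img pipeline (model ID is an assumption).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a chrome robot, highly detailed"           # hypothetical style prompt
frames = ["frame_0001.png", "frame_0002.png", "frame_0003.png"]  # stabilized input frames

for name in frames:
    # Re-seeding the generator with the SAME value for every frame is what keeps
    # the re-interpretation consistent from frame to frame.
    generator = torch.Generator("cuda").manual_seed(1234)
    init = Image.open(name).convert("RGB").resize((512, 512))
    styled = pipe(prompt, image=init, strength=0.45, guidance_scale=7.5,
                  generator=generator).images[0]
    styled.save("styled_" + name)
```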

  • @HopsinThaGoat · 1 year ago

    the video everyone wanted

  • @DanielvanHauten · 1 year ago · +2

    Awesome series of videos you're making, exactly what I was looking for.

  • @AaronLevitz · 1 year ago · +15

    Okay… what if we took a frame or two from this render back into Stable Diffusion, and used Inpainting to repair parts of his face? Then, re-render in EbSynth with that new information?

    • @enigmatic_e · 1 year ago · +2

      Mmm interesting, may have to test this!

    • @temporallabsol9531 · 1 year ago · +2

      Manual repair is the answer but it's barely even manual with all the tools we have available now

  • @ZeroIQ2 · 1 year ago

    I think you're doing some really cool things and it's great seeing what you've learnt. Keep up the great work!

  • @hharcont · 1 year ago

    I was just looking for this information, amazing stuff!

  • @thewelshninja · 1 year ago

    good job

  • @ConspireOfficial · 1 year ago

    Yeah, I watch Nerdy Rodent too.
    I gotta start applying this technique, it would be perfect for my Star Vox channel.

  • @aabdelghafour2684 · 1 year ago

    Thanks man, helpful video, keep it up.

  • @huyked · 1 year ago

    8:55 God damn, the stuff of nightmares. Perfect for this month of October.

  • @AbjectPermanence · 1 year ago · +8

    You need a better choice of keyframes to get the full use out of EbSynth. When the keyframe shows a man with his mouth closed, and then he opens his mouth in a later frame to show his teeth, the output animation as he smiles is going to look bad and need more keyframes. This can be difficult with Stable Diffusion. But if your keyframe shows the smile, the resulting animation will show the smile correctly and also work on frames where the mouth is closed.
    Ultimately, this is because of EbSynth's weakness with occlusions. If x isn't visible in the keyframe, EbSynth will definitely glitch when x suddenly appears. When it comes to animating faces, keyframes should be carefully chosen so that both the eyes and mouth are as open as possible. If the geometry in your keyframe matches the original well, and you choose your keyframe well too, the animation will even be able to talk and blink with just one keyframe.

    • @enigmatic_e · 1 year ago · +1

      Yeah, definitely need more keyframes. I need to go back to this someday. After making this, I discovered ways to get more consistent styles in SD by locking and tracking the head. I have some examples in my shorts if you wanna check it out.

  • @virtual_intel · 1 year ago · +4

    It's a pixel-for-pixel type swap you need to do to get EbSynth to work accurately. So if you can match every pixel with a mask filter, for instance, you'll improve your results 10-fold.

  • @virtual_intel · 1 year ago · +1

    I pulled it off on a few of my EbSynth vids, but haven't launched them yet. Been holding off for a much larger project worthy of uploading to YouTube.

  • @ThePlaceForThings · 1 year ago

    Awesome channel, learning a ton. I noticed you input a video with a guy smiling and it struggled. I wonder if adding "smiling" or "teeth" to the negative prompt section would help.

    • @enigmatic_e · 1 year ago

      I think what would help is what I recently discovered and talked about in my new video, and that's face tracking. It seems to give way better results.

  • @MarcusHiggs · 1 year ago · +2

    Two possible solutions. 1. Make your keyframe the smile frame, then use EbSynth to generate the frames 'down' and 'up' from the middle. You can't add information that is introduced, i.e. the teeth, but you can take it away. OR 2. Make a second keyframe at the point where you add new information, i.e. the teeth. You said this in the video. To keep the same style, do a faceswap in Photoshop using the warping tool and blending layers, then add teeth as well. It's a longer workaround, but something can be done.
    Hope these ideas help. I know for sure the first one would.

    • @Mangazimusic · 1 year ago

      Yeah, sure that would improve this particular video, but the reality is we're only going to be able to make truly impressive and game-changing collaborative AI animations once we crack consistent re-interpretations of moving images.

  • @gloxmusic74 · 1 year ago · +2

    If stable diffusion kept the consistency you wouldn't even need EbSynth at all, one day maybe 🤔

  • @ESTUDIONOFI · 1 year ago

    I love your content. It's very useful. Thank you! I think there must be a way to create more than just one reference so that EbSynth works better. Maybe you could use three frames from the video and try to create a consistent look in stable diffusion by combining them and then treating them in Photoshop.

    • @enigmatic_e · 1 year ago · +1

      Yea, maybe bringing it into Photoshop or editing it some other way might work.

  • @NoahProse · 1 year ago

    insane

  • @anthonymcgrath · 1 year ago

    new Aphex Twin video confirmed

  • @davidcapuzzo · 1 year ago

    Use Lockdown instead, you'll have much better results.

  • @roycho87 · 1 year ago

    In order to get better results you should consider using ControlNet and stricter prompts.
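
ControlNet, as suggested above, conditions generation on a structure map extracted from each frame (here Canny edges), which keeps the pose locked while the prompt drives the style. A minimal sketch with diffusers; the model IDs, frame path, and prompt are assumptions, not settings from the video.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB").resize((512, 512))  # hypothetical frame
edges = cv2.Canny(np.array(frame.convert("L")), 100, 200)               # edge map of the frame
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

styled = pipe(
    "portrait of a chrome robot, highly detailed",                      # hypothetical prompt
    image=control,
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(1234),                # fixed seed for consistency
).images[0]
styled.save("styled_frame_0001.png")
```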

  • @Red.Rabbit.Resistance · 1 year ago

    Hahahah pro fo sho brother, I wouldn't mind collabing with you sometime on a big project I have in the oven.
    Some great tips with EbSynth are to take keyframes with as many face actions as possible, so one with the eyes and mouth open, and then one with everything closed.
    It gives the AI enough information. Also try to keyframe wide shots that zoom in. You can edit pan-out shots in reverse to avoid those wonky blobs 🔥🔥, and you can also lock the camera on the person's eyes so the shake is gone.

  • @Acewolf2000 · 1 year ago · +2

    Take the original output image into Photoshop and give it a smile, then run that smile version through EbSynth to get your smile video, then halfway through the video swap out the videos and blend. 👍

  • @mrpixelgrapher · 1 year ago

    You can literally Puppet Warp the frame in Photoshop to create the in-between frame and then feed it to EbSynth as a keyframe.

  • @Stonefactor · 1 year ago

    If it's just a few frames that need to be fixed you can use Photoshop.

  • @The_one_and_only_Carpool · 1 year ago

    Visions of Chaos has machine learning; it has style transfer you can apply to video, but it works better than EbSynth.

    • @enigmatic_e · 1 year ago · +1

      Some others have mentioned Visions of Chaos. I will have to check it out.

    • @The_one_and_only_Carpool · 1 year ago

      @enigmatic_e I take it back, it just transfers the style; EbSynth does something else. Yeah, Visions of Chaos has a lot of fun stuff in it and it's free, like Deep Dream, GPT-Neo, music, too much to mention.

  • @titlecc · 1 year ago · +1

    Hello, I like your teaching very much. I hope you can add Chinese subtitles later

  • @RobertJene · 1 year ago

    yeah, stable diffusion needs to not JUST be able to use the same seed again, but use the same generation again

  • @VivRean · 1 year ago

    Check out the robo model on Hugging Face. For tweaks, use inpainting.

  • @scavengers4205 · 1 year ago · +2

    Yeah man, the only limitation of stable diffusion is getting a similar result. If they can get a similar result style, idk what's going to happen hahaha. Btw, nice videos man!

  • @AaronLevitz · 1 year ago

    And then, we just need a plausible explanation to handle the breaking. I’m thinking a layer of thin smoke is probably sufficient. Maybe some sparks. Just enough to say “the camera is capturing heat waves”.

  • @Qubot · 1 year ago · +4

    Thanks for the tips.
    If you try to put 2 frames on the same 1024x512 image, it generally duplicates the style across both faces. Have you tried whether that gives you better results?

    • @enigmatic_e · 1 year ago · +3

      Wait, that's a thing? You mean if you put both frames together and run it through SD, it will give the same style to the faces?

    • @grae_n · 1 year ago

      @enigmatic_e I can confirm this style consistency works.

    • @Qubot · 1 year ago

      @enigmatic_e Every time there are 2 faces of a person I prompted in my generations, they are identical.
      Maybe it works with robots too, just a supposition.

    • @enigmatic_e · 1 year ago · +3

      @Qubot I gotta try this! Thanks for the tip. If it works I'll def do a new vid and give you credit for it.

    • @Qubot · 1 year ago

      @enigmatic_e 🤞
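
The two-frames-on-one-canvas trick discussed in this thread can be sketched with Pillow and diffusers: both keyframes are styled in a single pass so they share one interpretation, then split back apart for EbSynth. File names, the 1024x512 layout, and the prompt are assumptions for illustration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Paste two keyframes side by side on one 1024x512 canvas so SD styles them together.
a = Image.open("keyframe_a.png").convert("RGB").resize((512, 512))
b = Image.open("keyframe_b.png").convert("RGB").resize((512, 512))
canvas = Image.new("RGB", (1024, 512))
canvas.paste(a, (0, 0))
canvas.paste(b, (512, 0))

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
styled = pipe("portrait of a chrome robot, highly detailed",
              image=canvas, strength=0.5,
              generator=torch.Generator("cuda").manual_seed(1234)).images[0]

# Split the styled canvas back into two keyframes for EbSynth.
styled.crop((0, 0, 512, 512)).save("styled_a.png")
styled.crop((512, 0, 1024, 512)).save("styled_b.png")
```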

  • @jeremyallemand8288 · 1 year ago

    For the smiling: why don't you take your creepy face picture, go to inpainting in Stable Diffusion, and change only the mouth to a smile?
    That way you won't lose the creepy face, and it gives you a new keyframe.
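
A minimal sketch of the inpaint-only-the-mouth idea above, using the diffusers inpainting pipeline; the checkpoint, file names, prompt, and hand-painted mouth mask are assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

keyframe = Image.open("styled_keyframe.png").convert("RGB").resize((512, 512))
# White pixels mark the area to regenerate: a hand-painted mask covering only the mouth.
mask = Image.open("mouth_mask.png").convert("RGB").resize((512, 512))

smiling = pipe("chrome robot face, smiling, showing teeth",   # hypothetical prompt
               image=keyframe, mask_image=mask,
               generator=torch.Generator("cuda").manual_seed(1234)).images[0]
smiling.save("styled_keyframe_smiling.png")  # the rest of the keyframe is preserved
```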

  • @sylviosandino8478 · 1 year ago

    Are all the steps here or are some missing in the edit? I followed all the steps, prompts, negative prompts, settings and even the same image, and it gives me completely different results... I don't get it.

  • @somefan6763 · 1 year ago

    There is a typo in the thumbnail (Ebstynth)

  • @iamYork_ · 1 year ago

    Great vid but not much of a fan of EbSynth... I used it in the past as a time saver [pre-AI] to animate over footage... but the final results were blah... Maybe I will have to try it with SD now that you made this tutorial... Good work as always my friend...

  • @ari2221 · 1 year ago · +2

    I think AE has some tools you should use to prepare the footage before sending it to EbSynth. Face tracking and covering the eyes in the original footage?

  • @julx97 · 1 year ago · +2

    What about stabilizing the footage first?

  • @pelomundo2080 · 1 year ago

    Help me, it's giving this error, what do I do?
    AssertionError: Couldn't find Stable Diffusion in any of:

  • @theblockfilms2080 · 1 year ago · +2

    Midjourney v4 does img2img now, so it should get pretty interesting. You can even get a base img2img photo from SD and then use that as an image reference and then make it way better in midjourney with the same positioning

    • @enigmatic_e · 1 year ago · +1

      Woah thats sick!

    • @Lintbrain · 1 year ago

      Do you know how to keep the positioning? I've tried v4 and it moved my head in a completely different spot

  • @musicalsleep5288 · 1 year ago

    If you use img2img or inpainting, that should help.

  • @adel57100 · 1 year ago

    Seems that it's pretty much the same process as the one used by Scott Lightiser that you mentioned in another video. Perhaps it would be a good idea to mention him again for this video?

    • @enigmatic_e · 1 year ago

      The concept is the same but he never really shared his process with settings and how and why he shot footage the way he did, at least not that I've seen. That's why I never thought about it, but you're right, I should have at least mentioned him.

  • @AIPixelFusion · 1 year ago

    Did you also use EbSynth for the intro? That one looked pretty good when talking

    • @enigmatic_e · 1 year ago · +1

      Yea i did!

    • @AIPixelFusion · 1 year ago

      @enigmatic_e Ooh nice! That one looks dope. You didn't have the same problem with the mouth? Does the mouth look better because it's a simpler style?

    • @enigmatic_e · 1 year ago · +1

      @AIPixelFusion It looks better because I'm not making a lot of movement besides moving my mouth. It tracks mouth movement, like talking, quite well.

  • @5minutehub · 1 year ago

    How can I keep the images I replace consistent?

  • @oboy9090 · 1 year ago

    Is there a way to keep the final video smoother when there are "faster movements"?
    The video I am animating is a video of a guy singing and moving back and forth. The problem is that when he dances the rendering looks blurry/smudged.
    What can be done?
    I only use 1 frame to animate; the final video is 1500 frames.

    • @enigmatic_e · 1 year ago

      Might need to use some of the methods of my newer videos. Head tracking helps with keeping consistent style. If you can get a few frames with very similar styles and look you can make even a faster video look quite good.

  • @hatuey6326 · 1 year ago · +1

    At 10:00, did you try using the same seed in SD and setting the denoising strength very low?

    • @enigmatic_e · 1 year ago

      I did try that, it didn't look like the first frame unfortunately.

  • @SaintMatthieuSimard · 1 year ago

    Hi, I'm trying to figure out how to load both Robo Diffusion and the pruned model together, but Checkpoint Merger doesn't work out of the box. What's the proper way to combine checkpoints' libraries?
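
For background, a weighted-sum merge of two checkpoints boils down to interpolating their weights key by key, which is roughly what the webui's Checkpoint Merger does in weighted-sum mode. A rough PyTorch sketch; the file names and the 0.5 weight are assumptions.

```python
import torch

# Any two SD 1.x checkpoints with matching keys (file names are assumptions).
a = torch.load("robo-diffusion-v1.ckpt", map_location="cpu")["state_dict"]
b = torch.load("sd-v1-4-pruned.ckpt", map_location="cpu")["state_dict"]

alpha = 0.5  # interpolation weight: 0.0 keeps model A, 1.0 keeps model B
merged = {}
for key, tensor_a in a.items():
    if key in b and b[key].shape == tensor_a.shape:
        merged[key] = (1 - alpha) * tensor_a + alpha * b[key]
    else:
        merged[key] = tensor_a  # keep A's weights where the models differ

torch.save({"state_dict": merged}, "robo-sd14-merged.ckpt")
```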

  • @THEREDBANDIT35 · 1 year ago

    Aren’t you able to save styles within stable diffusion? I haven’t tried styles before but it sounds like it might help keep that original look

    • @enigmatic_e · 1 year ago

      I haven't tried it that way. I always thought it simply saved the settings. Does it apply the style to new images?

    • @minipuft · 1 year ago · +3

      @enigmatic_e It mostly just saves your prompt from what I've tried, but maybe you could send the SD cyborg picture into inpainting and only edit the mouth into a smile? :)

    • @minipuft · 1 year ago

      w/ a low denoising scale and everything

    • @deadplthebadass21 · 1 year ago

      @enigmatic_e I think the imperfect smile is creepy and could be perfect for an indie horror movie or something lol

  • @jayalterEgoz · 1 year ago

    I guess stable diffusion is unstable 😅

    • @enigmatic_e · 1 year ago

      That's why ControlNet now exists! 😏 ruclips.net/video/1SIAMGBrtWo/видео.html

  • @KulidaVadym · 1 year ago

    Hi there) very useful video. Thanks man
    I want to ask you:
    Do you know how to get StableD locally for MacBook? Thanks

    • @enigmatic_e · 1 year ago

      I saw someone in my comments mention something about an OpenVINO SD version they installed on their Mac, but I'm not sure how that works, sorry.

  • @tylerlagreen · 1 year ago

    What's the link to download Stable Diffusion? 😊

  • @heythere6983 · 1 year ago

    Would using a camera that can shoot with more frames make it smoother or would this make it more difficult since it has to process more?
    Would this also be the same issue if a person is moving around and walking etc?
    I’m wondering how I can use this tech for converting videos but I wonder if it’s limited to just very still images
    Thanks for any help

    • @enigmatic_e · 1 year ago

      I discovered a possible solution. Face tracking helps keep consistency. I have a new video about it on my channel and I'm currently working on a new video showing other possibilities.

  • @daymianmejia5910 · 1 year ago

    Anyone know how to make this run on an AMD GPU rather than NVIDIA? Currently it's processing things on my CPU, which isn't ideal.
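
Not covered in the video, but on AMD cards the usual route for Stable Diffusion is the ROCm build of PyTorch (Linux), which exposes the GPU through the regular "cuda" device API, so GPU-enabled code runs unchanged. A quick way to check which build you have:

```python
import torch

print(torch.cuda.is_available())  # True if the ROCm (or CUDA) build can see a GPU
print(torch.version.hip)          # set on ROCm builds, None on CUDA builds

device = "cuda" if torch.cuda.is_available() else "cpu"
# pipe = pipe.to(device)  # move a diffusers pipeline onto the GPU, else fall back to CPU
```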

  • @matthewpaquette · 1 year ago

    9:48 did you try keeping the same seed?

  • @lazerusmfh · 1 year ago

    How does this compare to thin spline?

  • @KolTregaskes · 1 year ago

    Instead of After Effects can the video work be done in a free app like Blender? That would be good to see if possible. 😃

    • @enigmatic_e · 1 year ago

      I’ll look into it!

    • @KolTregaskes · 1 year ago

      @enigmatic_e Oh cool, that would be amazing. :-D

  • @Bringidon · 1 year ago

    is it not 'e flat synth'?.. just wondering.. not that it matters

    • @enigmatic_e · 1 year ago

      I'm not sure. If that's true, then I've been saying it wrong this whole time. 😂

    • @Bringidon · 1 year ago

      @enigmatic_e I don't know lol.. it's just the way I read it.. thanks for the video btw ;)

  • @coryberthelot8067 · 1 year ago

    Can u make a video on how to get the MidJourney checkpoint you're using?

    • @enigmatic_e · 1 year ago

      Link to checkpoint in description

  • @sumbodee3 · 1 year ago

    Just edit the image yourself, make the next keyframes, just Photoshop the mouth.

  • @RemonBerkers · 1 year ago

    Nice one, I actually did a similar thing on my page if you wanna check it out.

    • @enigmatic_e · 1 year ago

      Nice! Just checked it out. You made it look so good!

    • @RemonBerkers · 1 year ago

      @enigmatic_e Thanks man, still a lot to figure out with the settings though.